Grid optimization algorithms

A Grid-Based Optimization Algorithm for Parameters Elicitation in WOWA Operators: An Application to Risk Assessment

Marta Cardin and Silvio Giove
Department of Economics, Ca' Foscari University of Venice, Venice, Italy (sgiove@unive.it)

Abstract. In this paper we propose a joint grid-based and stochastic search for parameters elicitation in the case of WOWA aggregation functions. The method uses a grid search approach to determine the parameter of a monotonic quantifier and, for each of its values, a stochastic search in the space of the criteria weights that minimizes the sum of the quadratic errors between the computed and the real outputs of a learning set. A simulated application is proposed for transportation risk assessment, using an ad hoc questionnaire applied to risk matrices.

Keywords: Risk assessment, OWA, WOWA.

1 Introduction

Criteria aggregation is widely used in the applied sciences as the process by which different data have to be aggregated into a single positive number: signals, sensors, but also human judgements, each of them normalized into a common scale [6]. The specialized literature provides many different approaches to aggregation, starting from the simplest one, the Weighted Averaging (WA), up to more sophisticated ones such as Generalized Means, Ordered Weighted Averaging (OWA), Weighted Ordered Weighted Averaging (WOWA), and fuzzy measures [2, 21]. Every method requires the assessment of some parameters; for instance, in the case of WA the weights of the criteria need to be elicited, representing the trade-off between the criteria themselves. Given a collection of input-output data, this is usually done by solving a mathematical optimization problem. WA is unable to represent interactions among the criteria, such as synergies and conflicts, because WA is a completely compensative method. For this reason, other more general approaches were proposed, such as the one based on fuzzy measures, see Section 2 below. We note that such methods require many parameters to be elicited, increasing the computational complexity. Thus the literature proposed other approaches which include interactions among the criteria but, at the same time, reduce the numerical complexity. Among them, we focus attention on OWA, introduced by Yager [15, 16, 17, 18, 19], and on its extension, the WOWA aggregation operators, introduced by Torra [12, 13]. As in the WA case, OWA weights can be directly elicited from an input-output data collection using mathematical programming, but the same approach cannot be applied to WOWA, see Section 3. For this reason, we propose an innovative elicitation approach based on two phases: a grid-based approach and a stochastic search method. In the first phase the characterizing parameter of a monotone quantifier moves along a predetermined grid, while in the second phase, for each value of this parameter, the criteria weights are sampled from a uniform distribution. At each iteration, we store the quadratic error between the computed WOWA and the real output value. The best parameter values are selected as the ones which minimize the quadratic error. The algorithm is applied to an environmental risk assessment problem, where different types of risk need to be aggregated, as described in [10].
In this case, we propose a suitable questionnaire to be filled in by a (simulated) Decision Maker, who has to assign a score to every scenario in the questionnaire, as described in Section 4. The optimization problem is solved using the two-phase approach, and the numerical results show a good approximation between the computed WOWA values and the real outputs (the questionnaire scores). The same approach can be applied to any other WOWA parameter elicitation problem, provided a data collection is available.

The paper is structured as follows. Section 2 introduces Aggregation Operators, mainly focusing on the OWA and WOWA operators. Section 3 reports the parameter identification algorithm, based on a grid search and on Monte Carlo simulation. Section 4 reports an application of the algorithm to a risk assessment problem. The last Section concludes and presents possible extensions and future work.

2 Aggregation Functions; OWA and WOWA

Aggregation Operators (AO) are widely used in many problems where different numerical values coming from different sources need to be aggregated in order to obtain an overall, single output, see [1, 2, 6, 21]. An AO is a monotone and idempotent function $F : [0,1]^n \to [0,1]$ such that $F(0,0,\dots,0) = 0$ and $F(1,1,\dots,1) = 1$ (border conditions). Every AO requires some parameters to be elicited, which is not an easy task except in a few simple cases. Using a notation similar to [7], the WA operator is defined as follows:

$WA_p(x_1, x_2, \dots, x_n) = \sum_{i=1}^{n} p_i x_i$   (1)

where $x_i$ is the normalized value of the i-th criterion, $i = 1, 2, \dots, n$, and the $p_i$ are weights with $p_i \ge 0$ and $\sum_{i=1}^{n} p_i = 1$. The weights represent the relative importance of the criteria, and need to be determined by direct assignment or using a learning procedure. The WA is compensative and homogeneous of degree 1 [7]. In many applications compensativeness is not a desirable property, since a low value of one criterion should not be compensated by a high value of another one; for instance, in the field of sustainability evaluation, economic development cannot be paid for by a critical level of toxicity [11].

While WA is a completely compensative method, it cannot represent interactions among the criteria, i.e. synergies and conflicts. Other methods can overcome this limit. Among them, let us recall Non-Additive Measures (Fuzzy Measures) and the Choquet integral, see [2, 4, 8], a very general approach which however requires a huge number of parameters to be elicited. Another class of AO is the family of OWA operators, introduced by Yager [15, 16, 17, 18, 19]. For a recent review of OWA operators see [20].

2.1 The OWA Operator

An OWA operator is defined as follows [15, 16]:

$OWA_w(x_1, x_2, \dots, x_n) = \sum_{i=1}^{n} w_i x_{\sigma(i)}$   (2)

where $\sigma$ is a permutation of the index set $\{1, 2, \dots, n\}$ such that $x_{\sigma(1)} \ge x_{\sigma(2)} \ge \dots \ge x_{\sigma(n)}$; thus the operator is linear with respect to the ordered values of the criteria. Modulating the values of the weights, different AO can be obtained: if $w_1 = 1$ and $w_i = 0$ for $i = 2, \dots, n$, the MAX operator is obtained, while if $w_n = 1$ and $w_i = 0$ for $i = 1, \dots, n-1$, we obtain the MIN operator. If $w_i = 1/n$ for all $i = 1, \dots, n$, the OWA collapses to the simple average. Again, the median, the k-th order statistic, the Hurwicz operator and many other AO can be obtained with different weight choices. OWA operators are continuous, symmetric and homogeneous of degree 1, and represent a special case of the Choquet integral with respect to a symmetric non-additive measure [1, 18].
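[Editorial note] The special cases listed above are easy to check numerically. The following minimal Python sketch (our own illustration, not code from the paper; the function name `owa` is ours) implements Eq. (2) and shows how MAX, MIN and the arithmetic mean arise from particular weight vectors.

```python
import numpy as np

def owa(x, w):
    """Ordered Weighted Averaging (Eq. 2): weights applied to the
    input values sorted in non-increasing order."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    x_sorted = np.sort(x)[::-1]          # x_sigma(1) >= ... >= x_sigma(n)
    return float(np.dot(w, x_sorted))

x = [0.2, 0.9, 0.5, 0.7]
print(owa(x, [1, 0, 0, 0]))      # MAX operator        -> 0.9
print(owa(x, [0, 0, 0, 1]))      # MIN operator        -> 0.2
print(owa(x, [0.25] * 4))        # arithmetic mean     -> 0.575
```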
An OWA operator is said to be andness-type (orness-type) if the aggregated value is closer to the minimum (maximum) of its arguments. The following orness index is defined:

$Orness(OWA_w) = \frac{1}{n-1}\sum_{i=1}^{n} (n-i)\, w_i$   (3)

We obtain Orness = 1 in the optimistic case, corresponding to the MAX operator and the logic quantifier "at least one", while Orness = 0 in the pessimistic case, corresponding to the MIN operator and the quantifier "all". Different numerical procedures have been proposed to elicit the OWA weights [1]. If a collection of input-output data is available, linear or quadratic programming can be applied, as in the WA case; apart from the re-ordering of the input vector, the aggregation function is linear with respect to the weights. Alternatively, the parameters can be elicited using a monotonic quantifier, that is, a fuzzy linguistic quantifier, which can be associated with the term sets "it exists", "most", "at least half", "for all" and so on [17, 19]. More precisely, a Regular Increasing Monotone quantifier (RIM) is a continuous and monotone function $Q : [0,1] \to [0,1]$ such that $Q(0) = 0$ and $Q(1) = 1$, from which the OWA weights can be generated as follows:

$w_i = Q\left(\frac{i}{n}\right) - Q\left(\frac{i-1}{n}\right)$   (4)

The following parameterized family of RIM quantifiers is widely used in applications, with $\alpha \ge 0$ [16, 19, 2]:

$Q(z) = z^{\alpha}$   (5)

An andness behaviour is obtained for $\alpha \to 0$, while for $\alpha \to +\infty$ there is a tendency to orness. Tuning the value of $\alpha$ we thus move from the extreme non-compensative tendency to the opposite one, a completely compensative behaviour. A possible procedure to numerically optimize the value of $\alpha$ consists in computing the OWA for many different values of $\alpha$ and selecting the value which minimizes the average quadratic error; this phase is detailed in the next Section.

2.2 Extending OWA to WOWA

An extension of the OWA operator was proposed in the recent literature so as to also include the importance weights of the criteria, given that an OWA operator cannot implement the WA but only the simple average. The result is a new operator, the Weighted OWA (WOWA), which uses two different sets of weights: the first refers to the criteria weights (importance weights), while the second refers to the order of the criteria (OWA weights) [12, 13]. We limit ourselves to noting that, given a monotonic quantifier $Q(z)$ and a set of weights $p_i$, the corresponding WOWA operator can be computed as follows:

$WOWA^{Q}_{p}(x_1, x_2, \dots, x_n) = \sum_{i=1}^{n} w_i x_{\sigma(i)}$   (6)

where $\sigma$ is a suitable permutation of the index set $(1, 2, \dots, n)$ such that $x_{\sigma(1)} \ge x_{\sigma(2)} \ge \dots \ge x_{\sigma(n)}$ and:

$w_i = Q\left(\sum_{j=1}^{i} p_{\sigma(j)}\right) - Q\left(\sum_{j=1}^{i-1} p_{\sigma(j)}\right)$   (7)

In this way, both the relative importance of the criteria and the tendency to optimism can be taken into account. Finally, let us observe that while OWA operators correspond to symmetric non-additive measures, this is not necessarily true for WOWA [12, 13].
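[Editorial note] Equations (5)-(7) translate into a few lines of code. The sketch below is our own illustration (the names `rim_quantifier` and `wowa` are not from the paper): the OWA-type weights are built from the cumulative importance weights through $Q(z)=z^{\alpha}$ and then applied to the ordered inputs. How strongly the result tilts toward the larger or smaller inputs depends on $\alpha$; for $\alpha = 1$ the operator reduces to the plain weighted average.

```python
import numpy as np

def rim_quantifier(alpha):
    """RIM quantifier of Eq. (5): Q(z) = z ** alpha, with Q(0)=0, Q(1)=1."""
    return lambda z: np.clip(z, 0.0, 1.0) ** alpha

def wowa(x, p, Q):
    """WOWA of Eqs. (6)-(7): importance weights p, RIM quantifier Q."""
    x = np.asarray(x, dtype=float)
    p = np.asarray(p, dtype=float)
    order = np.argsort(-x)                                # sigma: x_sigma(1) >= ... >= x_sigma(n)
    cum = np.cumsum(p[order])                             # partial sums of p_sigma(j)
    w = Q(cum) - Q(np.concatenate(([0.0], cum[:-1])))     # Eq. (7)
    return float(np.dot(w, x[order]))                     # Eq. (6)

p = [0.4, 0.3, 0.2, 0.1]
x = [0.2, 0.9, 0.5, 0.7]
print(wowa(x, p, rim_quantifier(1.0)))    # alpha = 1: plain weighted average = 0.52
print(wowa(x, p, rim_quantifier(0.05)))   # a small exponent; other alphas tilt the result
print(wowa(x, p, rim_quantifier(20.0)))   # a large exponent; toward the extreme inputs
```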
3 The GBS Approach for WOWA Parameters Elicitation

As pointed out above, the OWA weights can be elicited from an input-output data collection using quadratic or linear programming [15, 16, 3, 20]. This is not possible for the WOWA operator: by definition, the weight elicitation problem is NP-hard in the number of criteria, since it splits into $2^n$ sub-problems, each of them subject to one of the possible orderings of the criteria. The numerical complexity can therefore be unacceptable, and for this reason we propose a two-phase approach based on a grid search and a stochastic optimization, the GBS algorithm (Grid-Based and Stochastic optimization), partially following what was suggested for a CES (Constant Elasticity of Substitution) function [5]. Using a suitable questionnaire, a learning set is defined, formed by a collection of K input-output data (scenarios); the cardinality of each scenario is n+1. After having defined a grid for the parameter $\alpha$, we run many sub-iterations, randomly extracting the values of the n weights $p_i$ from a uniform distribution on [0, 1]. In each sub-iteration, the WOWA aggregated values are computed for each scenario in the learning set. The average quadratic error, obtained by averaging the squared differences between the WOWA computed value and the corresponding output in the learning set, is then minimized, storing at each iteration the best values of the parameters ($\alpha$ and $p_i$). The procedure is repeated until all the grid values have been considered.

The algorithm is structured as follows.

INPUT DATA
1) LS = {x_1(k), x_2(k), ..., x_n(k); y(k)}, k = 1, ..., K: learning set (x_i(k): input values, i = 1, ..., n; y(k): output value);
2) Gr = {alpha_j}, j = 1, ..., Ng: vector containing the values of alpha, the grid elements (in the numerical simulations a constant-step grid has been used, that is alpha_j = j/Ng);
3) N_it: number of iterations for each value of alpha.

OUTPUT DATA
1) alpha*, weight*: optimal values of the parameter alpha and of the criteria weights;
2) Error: absolute average error between computed and learning data.

GBS ALGORITHM
a) Set Error = infinity
b) for each alpha in Gr:
c)   for each i = 1, ..., N_it:
d)     p_j <- U[0, 1], j = 1, ..., n
e)     compute the quantifier values Q^alpha needed in Eq. (7)
f)     compute Agg(k) = WOWA^Q_p(x_1(k), x_2(k), ..., x_n(k)), for all k = 1, ..., K
g)     compute eps = (1/K) * sum_{k=1..K} |Agg(k) - y(k)|
h)     if eps < Error then Error <- eps, alpha* <- alpha, weight* <- p
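[Editorial note] The pseudocode above maps almost line by line onto the following sketch. This is our own illustration, not the authors' code: it assumes the `wowa` and `rim_quantifier` helpers from the previous listing are in scope, and the grid size, iteration count, error metric (mean squared error for the search) and the normalisation of the sampled weights are our assumptions.

```python
import numpy as np

def gbs_elicit(X, y, n_grid=20, n_iter=500, rng=None):
    """Grid-Based and Stochastic (GBS) search sketch: grid over the
    quantifier exponent alpha, random sampling of the importance
    weights, keep the pair with the smallest quadratic error."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)        # K x n matrix of criteria values
    y = np.asarray(y, dtype=float)        # K outputs from the learning set
    n = X.shape[1]
    best = {"error": np.inf, "alpha": None, "weights": None}
    for j in range(1, n_grid + 1):
        alpha = j / n_grid                # constant-step grid, as in the paper's footnote
        Q = rim_quantifier(alpha)         # assumed helper from the previous listing
        for _ in range(n_iter):
            p = rng.uniform(size=n)
            p /= p.sum()                  # normalisation so that sum(p) = 1 (our assumption)
            agg = np.array([wowa(row, p, Q) for row in X])
            err = np.mean((agg - y) ** 2)
            if err < best["error"]:
                best = {"error": err, "alpha": alpha, "weights": p}
    return best
```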
4 Transport Risk Evaluation: Simulated Questionnaire and Parameters Elicitation

Environmental risk assessment requires the aggregation of different sources and types of risk. Among others, let us quote a recent case study in transport risk reported in [10], where four different types of risk have to be aggregated into a single Risk Index. In the quoted paper the Authors applied different methods, from the simple average up to the Generalized Mean, comparing the obtained results, but no parameter optimization was carried out. In any case, risk aggregation is multidimensional, and a preference structure is required, normally elicited from one Expert or a set of Experts. In this contribution we develop the elicitation of the preference structure using a suitable questionnaire to be filled in by one Expert (normally a set of Experts should be considered, applying Multi-Expert Decision Making and consensus analysis; nevertheless this is not the focus of our proposal, and we consider a single representative Expert), including the personal (subjective) attitude to risk, that is, the way of thinking of the representative Expert. We suppose that the Expert's preference structure can be represented by a WOWA operator, a very general aggregation tool, see above.

Based on the quoted paper, the available information consists of four risk matrices, see [10] and the references therein. The risk matrices refer to four types of risk: Assets, Reputation, People, Environment (A, R, P, E for brevity). For each of the four types, a two-entry matrix is assigned, whose dimensions are Likelihood and Severity of the damage. The Likelihood is partitioned into five linguistic labels (Improbable, Remote, Occasional, Probable, Frequent), while the Severity is defined by None, Negligible, Minor, Moderate, Significant, Severe. Each cell of the matrix contains a numerical evaluation of the risk on a common scale, say [1, 10], where 1 corresponds to null risk and 10 to the maximum risk, the two extreme situations (the best and the worst). After having assigned these values for the four risk matrices, every spatial region is characterized by a four-dimensional vector whose elements are the four risk assessment values. This way, given a risk scenario, that is, a Likelihood-Severity couple for each of the four risk types, a single cell is activated in each of the four matrices. For instance, the vector (2, 10, 1, 5) corresponds to the case where there is a Low risk for Assets, a Very High risk for Reputation, a Null risk for People, and a Medium risk for Environment.

Given a scenario, the four risk values have to be aggregated into a single number. In [10] this problem is approached using different types of Generalized Mean, but no suggestion is given for parameter optimization. Apart from the case of dominance, there is no objective way to put the single risk values together; thus the aggregation operator (WOWA) needs to reflect the preference structure of the Expert. For this reason, we propose a method that uses a preference structure implicitly obtained through an ad hoc questionnaire submitted to the Expert. Using the obtained answers, the GBS algorithm is applied to optimize the parameters of the WOWA operator. The structure of the questionnaire is described in the following Section, together with some numerical tests showing the satisfactory performance of the GBS algorithm on the case study.

5 Numerical Tests

This Section reports the results of some simulated scenarios, formed by a set of questions, each of them referring to the four risk types. The last column is the DM answer on the same scale [1, 10], with 1 corresponding to null (global) risk, 3 to low risk, and so on. The structure of the questionnaire is presented in Table 1. Each row in the Table represents a scenario, that is, a simulated question; thus 9 scenarios are considered in all. The first four columns report the instances of the four considered risk types (A, R, P, E) on the normalized scale [1, 10], and each of the last three columns contains a couple of numbers in parentheses: in order, the value given by the Expert and the value obtained by the algorithm. Each of the last three columns refers to a different simulated DM, each of them more or less andness-oriented. In the first case we simulated a linear Expert with equal weights. In the second case we kept equal importance weights but simulated an andness-oriented Expert, while in the last test we modified the weights, keeping the same andness as for the second DM. The results obtained by the GBS algorithm showed good correspondence with the data provided by each Expert, and the obtained parameters reflected the preference structure with satisfactory precision. Moreover, the WOWA operator exhibits good generalization capabilities, being able to represent the non-linear relationship of the simulated DM, at least for the simulated cases.
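[Editorial note] Purely as an illustrative usage sketch, a few questionnaire rows could be fed to the search routine sketched in Section 3 as follows. It assumes the hypothetical `gbs_elicit` helper defined above, and the rescaling of the 1-10 answers to [0, 1] via (v - 1)/9 is our assumption, since the paper does not state its normalisation; the scenario values and answers are taken from Table 1 below (linear Decision Maker).

```python
# Scenarios (A, R, P, E) and the linear DM's answers, from Table 1 below.
scenarios = [(1, 1, 5, 10), (10, 2, 1, 2), (7, 3, 2, 8), (5, 4, 6, 7)]
answers   = [4.25, 3.75, 5.00, 5.50]

norm = lambda v: (v - 1) / 9.0                     # map the 1..10 scale to [0, 1] (assumption)
X = [[norm(v) for v in s] for s in scenarios]
y = [norm(v) for v in answers]

result = gbs_elicit(X, y, n_grid=20, n_iter=2000, rng=0)
print(result["alpha"], result["weights"], result["error"])
```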
Three situations are considered: a linear DM with equal importance weights (column 5), a conjunctive DM with equal importance weights (column 6), and finally a conjunctive DM with unequal importance weights (column 7); in this last case, the weights were (0.27, 0.13, 0.07, 0.53). Thus, for instance, the couple (1.45, 1.43) in the last column of row 5 means that the DM assigned the value 1.45 to the 5-th scenario and the algorithm result is 1.43, for the conjunctive DM with unequal weights. In the three cases, the values of alpha were 1, 0.05 and 0.5 respectively.

Table 1. Questionnaire structure, simulated and computed output in three cases

(R1) (R2) (R3) (R4)   Linear         Conjunctive E.W.   Conjunctive NOT E.W.
  1    1    5   10    (4.25, 4.23)   (1.21, 1.21)       (1.37, 1.35)
 10    2    1    2    (3.75, 3.83)   (1.18, 1.18)       (1.25, 1.26)
  7    3    2    8    (5.00, 4.98)   (2.22, 2.22)       (2.47, 2.47)
  5    4    6    7    (5.50, 5.52)   (4.12, 4.12)       (4.18, 4.18)
  1    8   10    8    (6.75, 6.70)   (1.50, 1.51)       (1.45, 1.43)
  8    7    1    1    (4.25, 4.22)   (1.22, 1.21)       (1.17, 1.18)
  2    1    3    1    (1.75, 1.80)   (1.05, 1.05)       (1.02, 1.03)
  2    1    3    1    (4.75, 4.73)   (4.05, 4.05)       (4.02, 4.02)
  4    6    5    4    (7.00, 7.04)   (6.08, 6.08)       (6.10, 6.10)

The relative absolute errors (between the real and the computed output) are respectively 0.007, 0.001 and 0.003, clearly negligible in all the considered cases.

6 Conclusion

The WOWA operator extends the OWA tool by also considering the importance of the criteria, thus combining the cardinal properties of an averaging operator with the ordinal properties of OWA. For this reason, WOWA is well suited to many application problems in the field of Decision Theory. Nevertheless, to our knowledge, few efforts have been devoted to its parameters elicitation. In this contribution we proposed a combined grid-search and stochastic optimization algorithm (the GBS algorithm), based on implicit knowledge elicitation, with the aim of obtaining both the importance weights and the characterizing parameter of a monotone quantifier, inferring the preference structure of the Decision Maker through an ad hoc questionnaire. This approach has been applied to a transportation risk assessment, but it could be applied to other types of risk. Some preliminary tests confirmed the effectiveness of the method. In future work, the algorithm will be extended to the case where a priori information is available, such as the distribution of the importance weights, which could be preliminarily inferred from the Decision Maker. Moreover, we intend to apply the GBS algorithm to Group Decision Theory, combining the judgements of several Experts, partially following what is proposed in [9, 22] in the field of OWA operators.

References

1. Beliakov, G., Pradera, A., Calvo, T.: Aggregation Functions: A Guide for Practitioners. STUDFUZZ, vol. 221. Springer, Heidelberg (2007)
2. Calvo, T., Mayor, G., Mesiar, R.: Aggregation Operators: New Trends and Applications. STUDFUZZ, vol. 91. Springer, Heidelberg (2002)
3. Filev, D., Yager, R.R.: On the issue of obtaining OWA operator weights. Fuzzy Sets and Systems 94, 157–169 (1998)
4. Grabisch, M., Roubens, M.: Application of the Choquet integral in multicriteria decision making. In: Grabisch, M., Murofushi, T., Sugeno, M. (eds.) Fuzzy Measures and Integrals: Theory and Applications, pp. 348–374. Physica Verlag (2000)
5. Henningsen, A., Henningsen, G.: On estimation of the CES production function - revisited. Economics Letters 115, 67–69 (2012)
6. Klement, E.P., Mesiar, R., Pap, E.: Triangular Norms. Kluwer Academic Publishers (2000)
7. Llamazares, B.: On generalization of weighted means and OWA operators. In: Proc. of EUSFLAT-LFA 2011 (2011)
8. Marichal, J.: An axiomatic approach of the discrete Choquet integral as a tool to aggregate interacting criteria. IEEE Transactions on Fuzzy Systems 8(6), 800–807 (2000)
9. Palomares, I.: Applying OWA operators and RIM quantifiers to measure consensus with large groups of Experts. In: Proc. of the 6th International Summer School on Aggregation Operators - AGOP 2011 (2011)
10. Pérez, R., Alonso, P., Díaz, I., Montes, S.: Aggregation of risk assessment matrices for human reliability in transportation systems. In: De Baets, B., Fodor, J., Montes, S. (eds.) Proc. of EUROFUSE 2013 - Uncertainty and Imprecision Modelling in Decision Making, pp. 225–232. Ediciones de la Universidad de Oviedo (2013)
11. Pinar, M., Cruciani, C., Giove, S., Sostero, M.: Constructing the FEEM sustainability index: A Choquet integral application. Ecological Indicators 39, 189–202 (2014)
12. Torra, V., Lv, Z.: On the WOWA operator and its interpolation function. International Journal of Intelligent Systems 24, 1039–1056 (2009)
13. Torra, V.: The weighted OWA operator. International Journal of Intelligent Systems 12, 153–166 (1997)
14. Torra, V., Narukawa, Y.: Modeling Decisions. Springer (2007)
15. Yager, R.R.: On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Trans. on Systems, Man and Cybernetics 18, 183–190 (1988)
16. Yager, R.R.: Applications and extensions of OWA aggregations. International Journal of Man-Machine Studies 37, 103–132 (1992)
17. Yager, R.R.: Families of OWA operators. Fuzzy Sets and Systems 59, 125–148 (1993)
18. Yager, R.R., Kacprzyk, J. (eds.): The Ordered Weighted Averaging Operators: Theory and Applications. Kluwer Academic Publishers (1997)
19. Yager, R.R.: Quantifier guided aggregation using OWA operators. International Journal of Intelligent Systems 11, 49–73 (1996)
20. Yager, R.R., Kacprzyk, J., Beliakov, G. (eds.): Recent Developments in the Ordered Weighted Averaging Operators: Theory and Practice. STUDFUZZ, vol. 265. Springer, Heidelberg (2011)
21. Xu, Z.S., Da, Q.L.: An overview of operators for aggregating information. International Journal of Intelligent Systems 18, 953–969 (2003)
22. Wang, Y.M., Parkan, C.: A minimax disparity approach for obtaining OWA operator weights. Information Sciences 175, 20–29 (2005)


An Heuristic Approach for the Training Dataset Selection in Fingerprint Classification Tasks

Giuseppe Vitello (1), Vincenzo Conti (1), Salvatore Vitabile (2), and Filippo Sorbello (3)
(1) Faculty of Engineering and Architecture, University of Enna Kore, Enna, Italy - {giuseppe.vitello, vincenzo.conti}@unikore.it
(2) Department of Biopathology, Medical and Forensic Biotechnologies, University of Palermo, Palermo, Italy - salvatore.vitabile@unipa.it
(3) Department of Chemical, Management, Computer and Mechanical Engineering, University of Palermo, Palermo, Italy - filippo.sorbello@unipa.it

Abstract. Fingerprint classification is a key issue in automatic fingerprint identification systems. It aims to reduce the item search time within the fingerprint database without affecting the accuracy rate. In this paper a heuristic approach for the training dataset selection in fingerprint classification tasks, using only the directional image information, is described. The method combines a Fuzzy C-Means clustering method and a Naive Bayes Classifier, and it is composed of three modules: the first module builds the working datasets, the second module extracts the training image dataset and, finally, the third module classifies fingerprint images into four classes.
Unlike literature approaches that use a large number of training examples, the proposed approach requires only 18 directional images per class. Experimental results, conducted on a consistent subset of the freely downloadable PolyU database, show a classification rate of 87.59%.

Keywords: Fingerprint Classification, Directional Images, Fuzzy C-Means, Naive Bayes Classifier, Training Dataset Optimization.

1 Introduction

The new emerging market of mobile users is growing rapidly, influencing several scenarios such as commercial, banking and government applications. As a result, secure access system design [1] and high response speed are currently the main issues. In this field, fingerprint authentication and classification systems represent a valid solution. The identification process performed on a database divided into classes is fast, since the number of needed comparisons can be reduced by searching for the fingerprint only in the same class of the database [2]. Many fingerprint classification approaches are reported in the literature, based on macro features [3, 4], structural information [5], neural networks [6, 7, 8], fuzzy-neural networks [9], probabilistic models [10, 11], and so on. Unfortunately, macro features are not always present in fingerprint images, for example due to image acquisitions not correctly performed, such as in partial fingerprints [12]; in this case, the approach proposed in [13], using pseudo-singularity points, may be useful.

In this paper, a heuristic approach to optimize the training phase in fingerprint classification tasks is proposed. It is inspired by the works described in [14] and [15], and it uses only the directional image information [3]. Every element of the directional image represents the local orientation of the fingerprint ridges in the original grey-scale image (Figure 1). In more detail, the proposed approach combines a Fuzzy C-Means clustering method and a Naive Bayes Classifier, and it is composed of three modules. The first module, Datasets Building, builds the working datasets; the second module, Training Dataset Extraction, extracts the dataset of training images; the third module, Fingerprint Classification, classifies a fingerprint image into one of the following NIST standard classes: Left Loop, Right Loop, Tented Arch and Whorl (Plain Loop and Central Pocket Loop) [16]. Unlike literature approaches using many training examples, the proposed one requires only 18 directional images per class. Experimental results, conducted on a consistent subset of the free PolyU (Hong Kong Polytechnic University) database [17], show a classification rate of 87.59%.

Fig. 1. Example of a fingerprint image with the related directional image

The paper is structured as follows. Section 2 reports the main literature works on fingerprint image classification. Section 3 describes the proposed approach. Section 4 outlines the experimental results. Finally, conclusions and future works are reported in Section 5.

2 Related Works

Many classification approaches have been proposed in the literature, and many researchers are still working in this field. Below, some of these works are briefly reported. Ballan et al. in [3] reduce image distortion and contrast before computing the fingerprint directional image on the NIST database [16].
Successively, they extract singular points and classify the fingerprint using topological and numerical considerations about these points. Maio and Maltoni in [5] compute a relational graph, summarizing the fingerprint macro-structure, from the segmentation of the directional image. The obtained graph is compared with model graphs in order to classify the fingerprint; the authors do not specify which database they used. Patil and Suralkar in [6] use a neural network as decision stage. The network performs the matching process and is trained to identify and classify the fingerprint using the back-propagation algorithm. The presented methodology has been validated on a standard database of 800 images (the database name is not given), classified into six classes, obtaining a classification rate of 80.2%. Mohamed and Nyongesa in [9] present a classification scheme based on the encoding of singular points (core and delta) together with their relative positions and directions. The image analysis is carried out in four stages: segmentation, directional image estimation, singular point extraction and feature encoding. Successively, a fuzzy-neural network is used to classify the input feature codes, obtaining an average classification accuracy of 98.5% on the NIST-4 database. Jung and Lee in [10] use a probabilistic approach (Markov model) based on the ridge characteristics of fingerprint classes on the FVC2000 DB1 [18] and FVC2002 DB1 [19] databases, while Senior in [20] describes a novel classification method based on hidden Markov models and decision trees to recognize the ridge structure on the NIST-4 database. Pattichis et al. in [21] focus their work primarily on image and feature enhancement and on finding improved classifiers, and less on the development of novel fingerprint representations. Using an AM-FM (Amplitude Modulated - Frequency Modulated) representation for each fingerprint of the NIST database, they obtain significant gains in classification performance.

3 The Proposed Approach

The proposed heuristic approach, inspired by the works described in [14] and [15], is an efficient and effective method to optimize the training phase in fingerprint classification tasks, using only the directional image information. It combines a Fuzzy C-Means clustering method and a Naive Bayes Classifier, and it is composed of three modules: Datasets Building, Training Dataset Extraction and Fingerprint Classification (Figure 2). Unlike literature approaches using many training examples (e.g., in [7] the authors use 30 images per class, while in [10] half of each class of the whole database), the proposed one requires only 18 directional images per class. The following subsections describe the three modules.

Fig. 2. The architecture of the proposed system

3.1 Datasets Building Module

This module plays an important role because it requires the contribution of a domain expert to choose the Template dataset (from which the best training set is extracted). It is composed of three sub-modules, described below (Figure 3).

Fig. 3. The proposed Datasets Building Module

Gabor Filter Sub-module. It is applied to all images of the used database to enhance their quality [22].
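[Editorial note] The paper does not give implementation details for this enhancement step, so the following is only a minimal sketch of Gabor-based enhancement using OpenCV; the kernel parameters and the file name are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def gabor_enhance(gray, n_orientations=8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Filter the image with a small bank of Gabor kernels at evenly spaced
    orientations and keep the strongest response per pixel."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)  # psi = 0
        kernel /= np.abs(kernel).sum() + 1e-8       # rough normalisation of the kernel energy
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.max(responses, axis=0)

# usage (placeholder file name):
# enhanced = gabor_enhance(cv2.imread("fingerprint.png", cv2.IMREAD_GRAYSCALE))
```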
Directional Image Extraction Sub-module. It extracts the directional image in three steps: extraction of the direction of each pixel; processing of the previous step's output, assembling the pixels into 8x8 blocks; computation of the predominant direction of each block (in every 8x8 block, the direction with the greatest frequency is attributed to the considered block). In this work, 8 directions have been chosen, from 0° to 180° as shown in Figure 4, and codified as a number in [0, 7].

Fig. 4. The 8 directions used to build the directional image

In order to extract the direction d of the point (x, y), a vector S is calculated according to the following equation:

$S_d(x, y) = \sum_{k=1}^{q} \left| g(x_k, y_k) - g(x, y) \right|, \qquad d = 0, \dots, 7$   (1)

where g(x, y) and g(x_k, y_k) indicate the grey levels of the point (x, y) and of the k-th selected point along direction d, respectively, while q = 16 is the number of selected pixels along the considered direction. The direction is finally obtained as the position of the minimum value in the vector S. However, acquisitions not correctly performed can affect the calculation of the predominant directions inside spoiled zones; therefore, a smoothing algorithm is applied. This is achieved by calculating the directional histogram, comparing the directions in areas of 3x3 blocks: the direction of the central block is replaced by the highest-frequency direction of the neighbouring blocks.
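[Editorial note] A minimal sketch of the per-pixel direction estimate and the block-wise majority vote described above is given below. It is our own illustration under stated assumptions: the eight direction offsets are only an approximation of the quantised angles in Figure 4, boundary pixels are handled by averaging over the available samples, and the 3x3 smoothing pass described above would still be applied on top of the result.

```python
import numpy as np

# Approximate unit steps (dx, dy) for 8 quantised directions spanning 0..180 degrees.
DIRS = [(1, 0), (2, 1), (1, 1), (1, 2), (0, 1), (-1, 2), (-1, 1), (-2, 1)]

def pixel_direction(gray, x, y, q=16):
    """Direction of pixel (x, y): the d in 0..7 whose q sampled pixels differ
    least in grey level from the central pixel (cf. Eq. (1), argmin over d)."""
    h, w = gray.shape
    scores = []
    for dx, dy in DIRS:
        s, n = 0.0, 0
        for k in range(-q // 2, q // 2 + 1):
            yy, xx = y + k * dy, x + k * dx
            if k != 0 and 0 <= yy < h and 0 <= xx < w:
                s += abs(float(gray[yy, xx]) - float(gray[y, x]))
                n += 1
        scores.append(s / max(n, 1))
    return int(np.argmin(scores))

def directional_image(gray, block=8):
    """Predominant direction of every 8x8 block (majority vote of its pixels)."""
    h, w = gray.shape
    out = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(out.shape[0]):
        for bx in range(out.shape[1]):
            dirs = [pixel_direction(gray, x, y)
                    for y in range(by * block, (by + 1) * block)
                    for x in range(bx * block, (bx + 1) * block)]
            out[by, bx] = np.bincount(dirs, minlength=8).argmax()
    return out
```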
Datasets Construction Sub-module. It builds three different datasets: the 150-image Template dataset requires a domain expert, since it is hand-selected; the 100-image Validate dataset is randomly selected, following the natural distribution of the fingerprint classes, and is then hand-divided into the four considered classes by the domain expert; the Test dataset consists of the remaining images of the original database.

3.2 Training Dataset Extraction Module

It is composed of two sub-modules, described below (Figure 5), and works in a cooperative way with the Fingerprint Classification Module.

Fig. 5. The proposed Training Dataset Extraction Module

Sets Construction Sub-module. From the four clusters obtained by applying the Fuzzy C-Means clustering method to the Template dataset, it builds 250 collections of 18 randomly selected images per cluster: 12 images near the cluster center and 6 images near the boundary. In more detail, every boundary is identified by calculating the Euclidean semi-distance between each pair of cluster centers. Successively, for each collection, it builds 200 different sets, each of them composed of 3 groups of 6 randomly selected images per cluster (Figure 6). Finally, for each set, it creates 100 different set versions, adding one Validate image per group.

Training Dataset Selection Sub-module. It stores the accuracy rate of each set. Successively, it selects the one with the highest value over a threshold, experimentally fixed at 80%. The threshold is used to fix the number of items of the Validate sets and collections.

3.3 Fingerprint Classification Module

It is composed of five sub-modules, described below (Figure 7), and works in a cooperative way with the Training Dataset Extraction Module only for the training dataset extraction.

Fig. 6. The proposed set construction approach (the different colors represent the 4 clusters)

Fig. 7. The proposed Fingerprint Classification Module

Fuzzy C-Means Sub-module. It is composed of three components, each processing one group. Each component calculates five centroids: one centroid for the Test image and four centroids for the Training images (applying the average function to the elements of the same cluster). Fuzzy C-Means [23] is an iterative algorithm whose purpose is to find cluster centers that minimize the objective function described by formula (2):

$J_p = \sum_{i=1}^{c} \sum_{j=1}^{N} u_{ij}^{\,p}\, \lVert x_j - c_i \rVert^2$   (2)

where p in [1, infinity), the constant that determines the fuzziness degree of the classification process, has been experimentally fixed to 6. The algorithm stop condition is described by relation (3), where epsilon has been experimentally fixed to 0.1:

$\lVert U^{(t+1)} - U^{(t)} \rVert < \varepsilon$   (3)

Distances Calculation Sub-module. It calculates, for each group, the distances, element by element, between the centroid of the Test image and the four centroids of the Training images. By comparing such distances with a threshold, experimentally fixed to 0.1, it creates 4 binary vectors for each group (Figure 8).

Fig. 8. The cells of each black vector (Test centroid and 4 Training centroid vectors) contain the centroid coordinates; the other 4 vectors represent the binary vectors

Vector Test Selection Sub-module. It analyzes the 12 binary vectors and identifies the one that best represents the Test image. It chooses the first among those vectors containing more 1 than 0 values.

Unit Centroids Calculation Sub-module. It reorganizes the 12 binary vectors into 4 units, so that each unit is composed of the 3 vectors of the same cluster. Successively, it calculates 4 centroids, one per unit, computing the average of the respective elements (Figure 9).

Fig. 9. The 4 unit centroids obtained applying the average function to the 3 vectors of the same cluster (a different color is used for each cluster)

Naive Bayes Sub-module. It classifies the Test image using the 5 vectors obtained by the two previous sub-modules. A Naive Bayes Classifier is a simple probabilistic classifier based on Bayes' theorem with strong independence assumptions [14]. It assumes that the domain variables are independent given the class, and that each variable has a finite number of values. Usually, the model parameters (e.g., prior class probabilities and feature probability distributions) are approximated with the relative frequencies from the training database. In the proposed work, the class probabilities of the used NIST database have been experimentally fixed as in Table 1, following the natural distribution of the fingerprint classes.

Table 1. Class probabilities in the used NIST database

NIST class                        Value
Tented Arch                       0.07
Left Loop                         0.20
Right Loop                        0.25
Plain Loop/Central Pocket Loop    0.48
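[Editorial note] For concreteness, here is a compact NumPy version of the Fuzzy C-Means step, a generic textbook implementation using the parameter values quoted above rather than the authors' code; the thresholding into binary vectors and the Naive Bayes stage that follow it in the pipeline are omitted.

```python
import numpy as np

def fuzzy_c_means(X, c=4, p=6.0, eps=0.1, max_iter=100, rng=None):
    """Plain Fuzzy C-Means: alternate membership and centroid updates until
    the membership matrix changes by less than eps (cf. Eqs. (2)-(3)).
    p is the fuzziness exponent (6 in the paper), c the number of clusters."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)                        # N x d data matrix
    U = rng.uniform(size=(c, X.shape[0]))
    U /= U.sum(axis=0)                                    # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** p
        centers = Um @ X / Um.sum(axis=1, keepdims=True)  # weighted cluster centers
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U_new = dist ** (-2.0 / (p - 1))                  # standard membership update
        U_new /= U_new.sum(axis=0)
        if np.abs(U_new - U).max() < eps:                 # stop condition of Eq. (3)
            U = U_new
            break
        U = U_new
    return centers, U

# usage sketch: cluster flattened directional images into the four classes
# feats = np.array([d.ravel() for d in directional_images])
# centers, U = fuzzy_c_means(feats, c=4, p=6.0, eps=0.1)
```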
4 Experimental Results

To test the effectiveness of the proposed approach, the freely downloadable PolyU database [17] has been used. It contains 1480 fingerprint images belonging to the following NIST classes: Left Loop, Right Loop, Arches (Plain and Tented) and Whorl (Plain Loop, Central Pocket Loop, Accidental Loop and Double Loop) [16]. However, in the proposed work a consistent PolyU subset of 1185 images, containing the Left Loop, Right Loop, Tented Arch, Plain Loop and Central Pocket Loop images, has been used. Since in the literature no classification system has been tested using the PolyU database, we have performed a comparison (reported in Table 2) between the proposed approach and a standard Multilayer Perceptron (MLP) approach (50 hidden neurons), in terms of classification rate. The used Training and Test set sizes are described in Table 3.

Table 2. The proposed approach vs the standard MLP based approach

Table 3. The used Training and Test set sizes (150 and 100 images are used as Template dataset and Validate dataset, respectively)

5 Conclusions and Future Works

In this paper a heuristic approach to optimize the training phase in fingerprint classification tasks is described. The approach combines the classification properties of a Fuzzy C-Means clustering method and a Naive Bayes Classifier on directional image information. Unlike literature approaches using many training examples, the proposed approach requires only 18 directional images per class. Experimental results, conducted on a consistent subset of the free PolyU database, show a classification rate of 87.59%. Similar results can be obtained with a neural-network-based approach if and only if a very large training set is used. Future works will be aimed at the study and implementation of an automatic technique for the training dataset extraction, as well as at allowing system employment in different application domains.

References

1. Conti, V., Vitabile, S., Vitello, G., Sorbello, F.: An Embedded Biometric Sensor for Ubiquitous Authentication. In: Proc. of AEIT Annual IEEE Conference, pp. 1–6 (2013), doi:10.1109/AEIT.2013.6666815, ISBN: 978-8-8872-3734-4
2. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition. Springer, New York (2009), ISBN: 978-1-84882-253-5
3. Ballan, M., Sakarya, F.A., Evans, B.L.: A Fingerprint Classification Technique Using Directional Images. In: Proc. of the 31st Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 101–104 (1997), doi:10.1109/ACSSC.1997.680037, ISSN: 1058-6393
4. Awad, A.I., Baba, K.: Singular Point Detection for Efficient Fingerprint Classification. International Journal on New Computer Architectures and their Applications (IJNCAA) 2(1), 1–7 (2012), ISSN: 2220-9085
5. Maio, D., Maltoni, D.: A Structural Approach to Fingerprint Classification. In: Proc. of the 13th International Conference on Pattern Recognition (ICPR), vol. 3 (1996), doi:10.1109/ICPR.1996.547013, ISSN: 1051-4651
6. Patil, S.R., Suralkar, S.R.: Fingerprint Classification using Artificial Neural Network. International Journal of Emerging Technology and Advanced Engineering 2(10), 513–517 (2012), ISSN: 2250-2459
7. Conti, V., Militello, C., Vitabile, S., Sorbello, F.: An Embedded Fingerprints Classification System based on Weightless Neural Networks. In: Frontiers in Artificial Intelligence and Applications: New Directions in Neural Networks, vol. 193, pp. 67–75. IOS Press (2009), doi:10.3233/978-1-58603-984-4-67, ISSN: 0922-6389
8. Kamijo, M.: Classifying Fingerprint Images using Neural Network: Deriving the Classification State. In: Proc. of the IEEE International Conference on Neural Networks, vol. 3, pp. 1932–1937 (1993), doi:10.1109/ICNN.1993.298852, ISBN: 0-7803-0999-5
9. Mohamed, S.M., Nyongesa, H.O.: Automatic Fingerprint Classification System Using Fuzzy Neural Techniques. In: Proc. of the IEEE International Conference on Fuzzy Systems, vol. 1, pp. 358–362 (2002), doi:10.1109/FUZZ.2002.1005016, ISBN: 0-7803-7280-8
10. Jung, H.-W., Lee, J.-H.: Fingerprint Classification Using the Stochastic Approach of Ridge Direction Information. In: Proc. of the IEEE International Conference on Fuzzy Systems, pp. 169–174 (2009), doi:10.1109/FUZZY.2009.5277309, ISSN: 1098-7584
11. Qing, S., Xisheng, L., Hui, Y., Chen, Q.: Naive Bayes Classifier applied in Droplet Fingerprint Recognition. In: Proc. of the 3rd Global Congress on Intelligent Systems (GCIS), pp. 152–155 (2012), doi:10.1109/GCIS.2012.68, ISBN: 978-1-4673-3072-5
12. Conti, V., Vitello, G., Sorbello, F., Vitabile, S.: An Advanced Technique for User Identification using Partial Fingerprint. In: Proc. of the 7th International IEEE Conference on Complex, Intelligent and Software Intensive Systems (CISIS), pp. 236–242 (2013), doi:10.1109/CISIS.2013.46, ISBN: 978-0-7695-4992-7
13. Conti, V., Militello, C., Vitabile, S., Sorbello, F.: Introducing Pseudo-Singularity Points for Efficient Fingerprints Classification and Recognition. In: Proc. of the 4th IEEE International Conference on Complex, Intelligent and Software Intensive Systems (CISIS), pp. 368–375 (2010), doi:10.1109/CISIS.2010.134, ISBN: 978-0-7695-3967-6
14. Tang, Y., Pan, W., Li, H., Xu, Y.: Fuzzy Naive Bayes Classifier Based on Fuzzy Clustering. In: Proc. of the IEEE International Conference on Systems, Man and Cybernetics, vol. 5 (2002), doi:10.1109/ICSMC.2002.1176401, ISSN: 1062-922X
15. Vitello, G., Conti, V., Migliore, G.I.M., Sorbello, F., Vitabile, S.: A Novel Technique for Fingerprint Classification based on Fuzzy C-Means and Naive Bayes Classifier. In: Proc. of the 8th International IEEE Conference on Complex, Intelligent and Software Intensive Systems (CISIS), pp. 155–161 (2014), doi:10.1109/CISIS.2014.23, ISBN: 978-1-4799-4325-8
16. National Institute of Standards and Technology, http://www.nist.gov
17. PolyU database, http://www.comp.polyu.edu.hk/biometrics/HRF/HRF.htm
18. FVC Databases, http://bias.csr.unibo.it/fvc2000/databases.asp
19. FVC Databases, http://bias.csr.unibo.it/fvc2002/databases.asp
20. Senior, A.: A Combination Fingerprint Classifier. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(10), 1165–1174 (2001), doi:10.1109/34.954606, ISSN: 0162-8828
21. Pattichis, M.S., Panayi, G., Bovik, A.C., Hsu, S.-P.: Fingerprint Classification Using an AM-FM Model. IEEE Transactions on Image Processing 10(6), 951–954 (2001), doi:10.1109/83.923291, ISSN: 1057-7149
22. Batra, D., Singhal, G., Chaudhury, S.: Gabor filter based fingerprint classification using support vector machines. In: Proc. of the 1st IEEE India Annual Conference (INDICON), pp. 256–261 (2004), doi:10.1109/INDICO.2004.1497751, ISBN: 0-7803-8909-3
23. Zhang, J.-S., Leung, Y.-W.: Improved Possibilistic C-Means Clustering Algorithms. IEEE Transactions on Fuzzy Systems 12(2), 209–217 (2004), doi:10.1109/TFUZZ.2004.825079, ISSN: 1063-6706
