Video-Based Access Control by Automatic License Plate Recognition

Emanuel Di Nardo(1), Lucia Maddalena(2), and Alfredo Petrosino(1)

(1) University of Naples Parthenope, Department of Science and Technology, Naples, Italy
(2) National Research Council, Institute for High-Performance Computing and Networking, Naples, Italy
emanuel.dinardo@gmail.com, lucia.maddalena@na.icar.cnr.it, alfredo.petrosino@uniparthenope.it

Abstract. We report an access control system based on automatic license plate recognition, consisting of three main modules for acquisition, extraction, and recognition. The basic idea is to couple the online learning of a neural background model with a stopped foreground subtraction mechanism, so as to efficiently provide a subset of relevant video frames where to look for license plates. Another key point is the matching of the entire license plate ROI with those stored in a database of authorized license plates, based on suitable features and validation tests. Experimental results confirm that the proposed system attains overall performance comparable with that of state-of-the-art ALPR methods.

Keywords: Automatic License Plate Recognition, Access Control System, Neural-based Vehicle Detection.

1 Introduction

Automatic license plate recognition (ALPR) consists of extracting vehicle license plate information from images or image sequences taken by fixed or mobile cameras, identifying their unique associated identities [11]. Examples of applications include access control, where the plate number captured by a fixed camera is used to automatically allow registered users to enter restricted areas; law enforcement, where roadside cameras are adopted to detect vehicles violating traffic laws; and road patrolling, where vehicles equipped with installed or handheld cameras are adopted to monitor vehicular traffic [14]. ALPR is widely regarded as a solved problem, even though the proposed systems are often applicable only under restricted illumination, view-point, and plate specification conditions, or require specialized hardware [27].

In this work, we propose an access control system (ACS) based on ALPR, designed to provide the highest possible recognition accuracy while relying as much as possible on off-the-shelf, non-specialized hardware. Therefore, the reference setting includes a fixed, standard-resolution video camera positioned at the entrance of a restricted access area.

The paper is organized as follows. In Section 2, we present a fairly compact overview of the approaches to ALPR, providing links to appropriate references for extensive surveys. Section 3 describes the basic building blocks of the proposed system. In Section 4, we present results achieved with the proposed system, also providing performance comparisons with other existing systems. Section 5 includes conclusions and further research directions.

2 Related Work

Most of the modern ALPR systems described in the literature can be traced back to a three-step scheme that includes acquisition, extraction, and recognition.
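To fix ideas, the three-step scheme can be summarized as a simple processing loop. The following Python sketch is purely illustrative; all names (alpr_pipeline, detect_vehicle, extract_plate_rois, recognize_plate) are hypothetical placeholders, not the implementation of any specific system.

```python
# Illustrative three-step ALPR scheme (all names are hypothetical).
from typing import Optional

def alpr_pipeline(frames, detect_vehicle, extract_plate_rois,
                  recognize_plate) -> Optional[str]:
    """Run acquisition -> extraction -> recognition over a video stream.

    detect_vehicle(frame)     -> bool: acquisition (is a vehicle present/stopped?)
    extract_plate_rois(frame) -> list of candidate license plate ROIs
    recognize_plate(roi)      -> matched plate identity, or None
    """
    for frame in frames:
        if not detect_vehicle(frame):          # acquisition: skip irrelevant frames
            continue
        for roi in extract_plate_rois(frame):  # extraction: localize candidate plates
            plate = recognize_plate(roi)       # recognition: match against database
            if plate is not None:
                return plate                   # e.g., trigger the access barrier
    return None
```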
The acquisition step is aimed at acquiring vehicle images using a camera and determining when the subsequent steps must be activated. Indeed, the continuous monitoring of the scene under surveillance is a computationally demanding task that could also lead to incoherent results. Therefore, most modern ALPR systems [19,4,1] implement an acquisition step that detects the presence of vehicles in the monitored area. Acquisition can be achieved through specialized sensors (usually infrared or ultrasound sensors) or through methodologies that detect new objects in the scene, usually based on background subtraction.

The extraction step (often referred to as "localization" or "detection") performs an automatic selection of the license plate region of interest (ROI), in order to limit the image area to which the subsequent step is applied. This not only reduces processing times, but also keeps disturbing objects, which could generate confusion, out of the recognition step. Generally, extraction exploits license plate features in order to distinguish the plate from other scene objects. These features can include image edges, texture, color, spatial measurements, presence of characters, or a combination of them. For extensive and up-to-date reviews of license plate extraction approaches, the interested reader is referred to [2,11]. The task is hindered by several issues: license plates may differ from state to state (in terms of dimensions, color, number and distribution of characters), the presence of other text areas in the scene can generate confusion, and illumination conditions as well as plate dirtiness can strongly influence extraction accuracy.

The recognition step allows the system to identify the license plate included in the detected ROI. Recognition is very often achieved by segmenting each single character and applying optical character recognition algorithms to each of the segmented regions, as witnessed by the abundant literature reported in [2,11]. Much more rarely, recognition is achieved by extracting features from the entire license plate, which are then matched between the current frame and images included in a license plate dataset [7,10]. Among the main issues of the recognition step are invariance to license plate rotation and scaling, as well as to illumination conditions, which can be better handled by the second approach.

3 The Proposed System

The proposed ALPR system follows the three-step scheme described in the previous section in order to automatically recognize a license plate within a dataset of license plates allowed to enter a restricted access area. Our choices for the three steps are detailed in the following.

3.1 Acquisition

The acquisition module is based on foreground detection, achieved by neural-based background subtraction, and on stopped foreground detection, in order to detect cars that stop in the monitored area; a trigger alarm is issued for opening the barrier in case the license plate of the car is recognized.

For moving object detection we adopt the 3dSOBS+ algorithm [23], based on a neural background model B_t that is automatically generated and updated at each time t by a self-organizing method. The algorithm has been shown to accurately handle most of the well-known issues related to background maintenance for moving object detection (moving backgrounds, gradual illumination variations, shadows cast by moving objects) and to be robust against false detections for different types of videos taken with stationary cameras.

For the detection of stopped objects, we adopt the SFS algorithm proposed in [22]. The basic idea consists of keeping a model F_t of moving foreground pixels that is similar to the neural model adopted for background pixels. At each time t, foreground pixels are classified as stopped pixels if their moving foreground model holds the same features for at least τ consecutive frames, where τ is a stationary threshold whose choice is application dependent. The model for stopped pixels is moved to a stopped foreground model S_t, while the remaining foreground pixels are classified as moving pixels.
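The sketch below conveys the flavor of this acquisition step in Python with OpenCV. It is not the authors' implementation: OpenCV's MOG2 background subtractor stands in for the neural 3dSOBS+ model, and a per-pixel counter of consecutive foreground frames stands in for the SFS moving-foreground model (SFS actually checks the stability of pixel features, not mere foreground persistence). The function name and the parameters tau and min_stopped_area are illustrative assumptions.

```python
# Acquisition sketch: background subtraction + stopped-pixel detection.
# MOG2 is a stand-in for the neural 3dSOBS+ background model; the counter
# is a simplified stand-in for the SFS moving-foreground model F_t.
import cv2
import numpy as np

def stopped_vehicle_trigger(video_path, tau=80, min_stopped_area=5000):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                    detectShadows=True)
    counter = None  # per-pixel count of consecutive foreground frames
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)
        fg = (fg == 255).astype(np.uint8)   # drop the shadow label (127)
        if counter is None:
            counter = np.zeros(fg.shape, dtype=np.int32)
        # Increment where foreground persists, reset elsewhere.
        counter = np.where(fg == 1, counter + 1, 0)
        stopped_mask = counter >= tau       # analogous to S_t
        if stopped_mask.sum() >= min_stopped_area:
            cap.release()
            return frame_idx, frame         # trigger: a large stopped region
        frame_idx += 1
    cap.release()
    return None, None
```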
3.2 Extraction

In order to extract the license plate ROI, we rely on Radon projections of the image edges, also exploiting a priori information on license plates, including their usual aspect ratio, the color contrast between characters and background, as well as the presence of characters in the searched area.

For each sequence image I, after median filtering pre-processing, image edges are extracted through the Sobel operator. Image projections P_x(x) and P_y(y) are computed in the horizontal and vertical directions, respectively:

    P_x(x) = \sum_{j=0}^{h-1} I(x,j),  \qquad  P_y(y) = \sum_{i=0}^{w-1} I(i,y),    (1)

where w × h is the size of I. Then, projection peaks and the extremes of the peak region that identify the license plate ROI are detected. Specifically, in the case of horizontal projections, peaks x_p are computed as

    x_p = \arg\max_{0 \le x < w} P_x(x)    (2)

and extremes x_l and x_r are computed as

    x_l = \max \{ x \in [0, x_p] : P_x(x) \le c \cdot P_x(x_p) \},  \qquad  x_r = \min \{ x \in [x_p, w) : P_x(x) \le c \cdot P_x(x_p) \},    (3)

with c ∈ (0,1) a constant value. Analogous formulas hold for vertical projections.

Fig. 1. License plate extraction by horizontal and vertical projections of image edges.

In order to make sure that all possible rectangular ROIs are taken into account, we select the n_p highest local maxima in each projection direction, for a total of n_p^2 candidate ROIs.
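A minimal Python/OpenCV sketch of this projection analysis is given below, assuming a grayscale frame. The function names (edge_projections, peak_extremes, candidate_roi) are illustrative, and only the single global peak of each profile is handled, rather than the n_p local maxima described above.

```python
# Projection-based plate localization sketch (Eqs. (1)-(3)), illustrative only.
import cv2
import numpy as np

def edge_projections(gray):
    """Median filtering, Sobel edges, and the projections of Eq. (1)."""
    smooth = cv2.medianBlur(gray, 5)
    edges = np.abs(cv2.Sobel(smooth, cv2.CV_64F, 1, 0, ksize=3))
    p_x = edges.sum(axis=0)  # P_x(x): sum over rows j
    p_y = edges.sum(axis=1)  # P_y(y): sum over columns i
    return p_x, p_y

def peak_extremes(p, c=0.15):
    """Peak of Eq. (2) and extremes of Eq. (3) for one projection profile."""
    peak = int(np.argmax(p))
    thr = c * p[peak]
    below_left = np.where(p[:peak + 1] <= thr)[0]
    below_right = np.where(p[peak:] <= thr)[0]
    lo = int(below_left[-1]) if below_left.size else 0
    hi = peak + int(below_right[0]) if below_right.size else len(p) - 1
    return lo, hi

def candidate_roi(gray, c=0.15):
    """One candidate ROI from the global peaks of both projections."""
    p_x, p_y = edge_projections(gray)
    x_l, x_r = peak_extremes(p_x, c)
    y_t, y_b = peak_extremes(p_y, c)
    return gray[y_t:y_b + 1, x_l:x_r + 1]
```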
A further heuristic postprocessing of the extracted ROIs is then carried out, aimed at ensuring that each of them really includes a license plate and at pruning the others:

1. Shape refinement: if the area of a ROI is much higher or lower than the expected license plate area A, the detected region is likely linked to one of the excess local maxima taken into account, and it is pruned. Moreover, a license plate should have a higher number of holes compared to other scene objects; therefore, a region is also pruned if its Euler number is less than a fixed number n_E.

2. Aspect ratio: a ROI is discarded if its aspect ratio is too different from the expected aspect ratio r. In order to take into account acquisition noise and possible adverse illumination conditions, a ROI is pruned only if its aspect ratio falls outside the range [r − δ, r + δ], with δ an experimentally chosen threshold.

3. Brightness analysis: a further test is based on the total brightness reflected by the plate surface. Usually, license plates have dark characters on a light background, thus showing overall high brightness. Therefore, after converting the ROI into the HSV color space, we compute the histogram H of the brightness component B and choose the smallest (b_min) and largest (b_max) non-empty classes of H, and their average b_med:

    b_min = \min \{ b \in B : H(b) \ne 0 \},  \qquad  b_max = \max \{ b \in B : H(b) \ne 0 \},    (4)

    b_med = (b_min + b_max) / 2.    (5)

The value β, given by the difference of the sums over the two identified areas of the histogram,

    β = \sum_{b_1 = b_med}^{b_max} H(b_1) \; - \; \sum_{b_2 = b_min}^{b_med - 1} H(b_2),    (6)

provides an indication of whether the ROI has sufficient brightness (β > 0) to be considered as including a license plate, or should be pruned. Analogous reasoning can be applied to the case of light characters over a dark background.

4. Character presence: the presence of characters in a selected ROI is checked in order to discard ROIs including fewer than n_c characters. After contrast enhancement, the detection of characters is performed through horizontal projections of the license plate ROI's edges, in a way similar to what has been done for extracting the candidate ROIs in the entire image, leading to a segmentation of characters into ROI blocks (see Fig. 2).

Fig. 2. Projection-based character segmentation into ROI blocks.
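The following sketch illustrates tests 1-3 in Python with OpenCV. It is a simplified reading of the above, not the authors' code: in particular, the hole-based shape test is approximated by counting inner contours in the binarized ROI (an assumption about how the paper's "Euler number" criterion translates to code), and the thresholds are the illustrative values later reported in Section 4.3.

```python
# ROI validation sketch for postprocessing tests 1-3 (values from Sec. 4.3).
import cv2
import numpy as np

def count_holes(roi_gray):
    """Approximate the hole-based shape test by counting inner contours."""
    _, binary = cv2.threshold(roi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
    if hierarchy is None:
        return 0
    # hierarchy[0][i][3] >= 0 marks a contour with a parent, i.e., a hole.
    return int(np.sum(hierarchy[0][:, 3] >= 0))

def brightness_ok(roi_bgr):
    """Brightness test of Eqs. (4)-(6): keep the ROI only if beta > 0."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [2], None, [256], [0, 256]).ravel()
    nonzero = np.nonzero(hist)[0]
    b_min, b_max = int(nonzero[0]), int(nonzero[-1])              # Eq. (4)
    b_med = (b_min + b_max) // 2                                  # Eq. (5)
    beta = hist[b_med:b_max + 1].sum() - hist[b_min:b_med].sum()  # Eq. (6)
    return beta > 0

def roi_is_plausible_plate(roi_bgr, r=3.27, delta=1.0, n_e=3):
    h, w = roi_bgr.shape[:2]
    if h == 0 or w == 0:
        return False
    if not (r - delta <= w / h <= r + delta):   # test 2: aspect ratio
        return False
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    if count_holes(gray) < n_e:                 # test 1: hole-based shape test
        return False
    return brightness_ok(roi_bgr)               # test 3: brightness analysis
```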
3.3 Recognition

In our ACS application context, the proposed recognition module relies on matching the extracted license plate ROI (testing dataset) with those stored in a database of authorized license plates (training dataset), based on suitable features and validation tests. As will also be shown through experimental results (Section 4), this approach makes the recognition step robust to illumination and position variations, to plate surface irregularities, and to partial occlusions.

Features representing the extracted license plate ROIs are based on Affine-SIFT (ASIFT) [29], a fully affine invariant image comparison method that is robust not only to translation, rotation, and scaling, but also to image distortions arising from the camera orientation. Similarly to the well-known SIFT [21], it produces a 128-dimensional feature vector characterizing each keypoint, but it tends to produce a higher number of keypoints.

Feature matching, which analyzes the similarity of each feature vector T_k in the test image with the feature vectors D_i in the training dataset, is based on nearest neighbor search using the Euclidean distance [20], identifying the training image having the nearest feature vector D_1. The adopted space partitioning technique is the Randomized KD-Tree [8,26], which iteratively subdivides the search space into sub-regions containing half the points of the original region, using more than one search tree.

Three validation tests follow, in order to exclude from the matching results those keypoints whose feature vectors have no good match in the training set:

1. The first validation test considers the Nearest Neighbor Distance Ratio (NNDR) [20,25], which compares the closest feature vector D_1 with the second closest feature vector D_2 belonging to a different class:

    \| D_1 - T_k \|_2 \, / \, \| D_2 - T_k \|_2 < ρ_1,    (7)

with ρ_1 ∈ (0,1). NNDR discards a match if the L_2 distance from the nearest matched feature vector is not significantly different from that of a different license plate.

2. The shape validation test relies on Hu moments [15], adopted to describe the shape of the objects related to the matched keypoints in a way that is invariant to scaling, rotation, and translation. If the objects are not similar enough according to these moments, the match is discarded. Specifically, for each couple (T,D) of testing and training keypoints, the seven Hu invariant moments h_j^T, h_j^D, j = 1,...,7, are computed on the contours of the corresponding objects in the testing and training images. The match is discarded if these contours are too different, i.e., if

    Diss(T,D) = \max_{j=1,...,7} \; | m_j^T - m_j^D | \, / \, | m_j^T | \; > \; ρ_2,    (8)

with

    m_j^T = \mathrm{sign}(h_j^T) \cdot \log |h_j^T|,  \qquad  m_j^D = \mathrm{sign}(h_j^D) \cdot \log |h_j^D|,    (9)

and ρ_2 ∈ (0,1).

3. As a last validation step, the homography between training and testing matched images is computed by RANSAC [12] to further prune outliers, i.e., those keypoints whose re-projection error is greater than ρ_3 pixels.

At the end, a testing license plate k is recognized as license plate j of the training dataset if

    j = \arg\max_i FM(T_k, D_i)  \;\; \text{AND} \;\; \max_i FM(T_k, D_i) \ge 3,

where FM(T_k, D_i) indicates the number of matching testing/training features that have passed the three validation steps.
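A condensed Python/OpenCV sketch of this matching scheme follows. Standard SIFT stands in for ASIFT (which OpenCV does not ship); the NNDR test of Eq. (7) and the RANSAC homography check are implemented directly, while the Hu-moment dissimilarity of Eqs. (8)-(9) is shown as a standalone helper, since wiring each keypoint to its object contour is beyond a short sketch. Function names and the simplification that D_2 is simply the second nearest neighbor (rather than the nearest neighbor of a different class) are assumptions.

```python
# Recognition sketch: NNDR-filtered KD-tree matching + RANSAC validation.
# SIFT is a stand-in for ASIFT; parameter values follow Section 4.4.
import cv2
import numpy as np

sift = cv2.SIFT_create()
# FLANN with randomized KD-trees (algorithm=1 selects the KD-tree index).
matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4), dict(checks=50))

def feature_matches(test_img, train_img, rho1=0.8, rho3=5.0):
    """Return FM(T_k, D_i): matches surviving the NNDR and homography tests."""
    kp_t, des_t = sift.detectAndCompute(test_img, None)
    kp_d, des_d = sift.detectAndCompute(train_img, None)
    if des_t is None or des_d is None:
        return 0
    good = []
    for pair in matcher.knnMatch(des_t, des_d, k=2):
        # Eq. (7): keep only matches with a distinctive nearest neighbor.
        if len(pair) == 2 and pair[0].distance < rho1 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:  # a homography needs at least 4 correspondences
        return 0
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_d[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC prunes matches whose re-projection error exceeds rho3 pixels.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, rho3)
    return 0 if mask is None else int(mask.sum())

def hu_dissimilarity(contour_t, contour_d):
    """Shape test of Eqs. (8)-(9) on two matched-object contours."""
    eps = 1e-30  # guard against log(0) and division by zero
    h_t = cv2.HuMoments(cv2.moments(contour_t)).ravel()
    h_d = cv2.HuMoments(cv2.moments(contour_d)).ravel()
    m_t = np.sign(h_t) * np.log(np.abs(h_t) + eps)
    m_d = np.sign(h_d) * np.log(np.abs(h_d) + eps)
    return float(np.max(np.abs(m_t - m_d) / (np.abs(m_t) + eps)))

def recognize(test_img, database):
    """Decision rule: best-matching plate, accepted only if FM >= 3."""
    scores = [feature_matches(test_img, d) for d in database]
    best = int(np.argmax(scores))
    return best if scores[best] >= 3 else None
```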
4 Experimental Results

4.1 Data

For testing the proposed ACS, we produced the ACS Video Dataset, including eight home-made color videos of size 1280 × 720, for a total of 5900 frames. These are typical ACS videos, taken from three different view-points and under different illumination conditions. Example frames of each video, identified by the license plate number, are reported in Fig. 3.

Fig. 3. Example frames from the ACS Video Dataset: BL021TA, CS008PX, EH246ZK, ER984ZN, BD691JJ, CM640GG, DP756YZ, DW072YY.

In order to focus attention only on the image areas where license plates may appear, for each of the three different view-points we defined a search area (see the white pixels in the masks of Fig. 4) where the proposed ACS is applied.

Fig. 4. Search areas for: (a) BL021TA and CS008PX; (b) EH246ZK and ER984ZN; (c) BD691JJ, CM640GG, DP756YZ, and DW072YY.

For the recognition phase, we further produced the ACS Recognition Dataset, an image database of fifty different license plates used for recognition. Example images are reported in Fig. 5. To better test the recognition performance, this database also includes cases of very similar license plates, such as the one in Fig. 5-(d), obtained by digitally modifying the digit 8 in the original license plate of Fig. 5-(c). Both the ACS Video Dataset and the ACS Recognition Dataset are available for download at http://cvprlab.uniparthenope.it.

Fig. 5. Example images from the ACS Recognition Dataset. Similar license plates can be observed in (c) CS008PX (original) and (d) CS000PX (digitally modified).

4.2 Acquisition Results

Fig. 6 shows the results of the acquisition step described in Section 3.1 on video BL021TA of the ACS Video Dataset. As soon as single pixels are detected as moving and remain similar to the foreground for τ consecutive frames, they are classified as stopped (red pixels in Figs. 6-(a) and (e)) and moved from the moving foreground model F_t (Figs. 6-(d) and (h)) to the stopped foreground model S_t (Figs. 6-(c) and (g)). Further foreground pixels, previously covered by the barrier, have not yet reached the stationary threshold in frame t = 420 and are still stored in the moving foreground model F_t (Fig. 6-(d)).

Fig. 6. Acquisition step on video BL021TA of the ACS Video Dataset, frame t = 350 (first row) and t = 420 (second row): stopped foreground pixels (first column); representations of the background model B_t (second column), stopped foreground model S_t (third column), and moving foreground model F_t (fourth column).

In Table 1, we report the results of the acquisition module on each sequence of the ACS Video Dataset, obtained choosing a stationary threshold τ = 80 (values for all remaining parameters have been chosen as in [22,23] for all the video sequences). The second column reports the number i_S of the frame in which the car begins stopping, the third column reports the number of the frame where the stopped object event should be detected (the Ground Truth, GT), while the fourth column reports the number of the frame where the stopped object event has actually been detected.

Table 1. Results of stopped foreground detection on the ACS Video Dataset

  Video     Start i_S of    GT stopped event   Stopped event
            stopped event   (i_S + τ)          trigger issued
  CS008PX   172             252                228
  DW072YY   164             244                145
  BL021TA   288             368                344
  BD691JJ   249             329                222
  DP756YZ   533             613                433
  CM640GG   306             386                245
  EH246ZK   210             290                262
  ER984ZN   246             326                296

It can be observed that the acquisition module triggers the stopped alert about one second earlier than expected. Indeed, the pixel-based approach starts signaling stopped foreground pixels of the uniformly colored auto body before the full auto front side stops. Even though this anticipation has proved beneficial to the system, providing further initial frames where to look for possible license plates, region-level post-processing of the stopped foreground masks could easily help in detecting only the complete object as stopped, based on the pixel-wise information. Further experimental results concerning moving and stopped object detection accuracy on publicly available sequences can be found in [22,23].

4.3 Extraction Results

The extraction step for the ACS Video Dataset has been performed on all sequence frames where stopped foreground objects have been signaled by the acquisition step. Examples of extracted license plate ROIs are reported in Fig. 7, where we can observe high accuracy in the identification of ROI borders. Only a few extracted ROIs have been partially detected (e.g., Fig. 7-(e) includes only some of the license plate digits) or are completely wrong (e.g., Fig. 7-(f) does not include any license plate). These partial/complete failures are due to the camouflage of the car with the background, to the license plate orientation, or to the illumination conditions, which can negatively influence the segmentation, the projections, or the extraction of the license plate.

Fig. 7. Extraction step on the ACS Video Dataset: examples of correct ((a)-(d)), partial (e), and wrong (f) extracted ROIs.

In Table 2, for each sequence of the ACS Video Dataset we report the results of the extraction module in terms of the number of correct (third column), partial (fourth column), and wrong (fifth column) extracted ROIs, as compared to the total number of extracted ROIs (second column). In all the experiments, values for the extraction parameters (see Section 3.2) have been chosen based on a priori information on Italian license plates and on experiments, as follows: c = 0.15 in Eq. (3); n_p = 3 for the number of highest local maxima in each projection direction; expected license plate area A in the range [20, 150] × [20, 150] pixels and threshold n_E = 3 for the Euler number (postprocessing step 1); expected license plate aspect ratio r = 3.27 (Italian license plate standard dimensions are width 360 mm and height 110 mm), with δ = 1 (postprocessing step 2); minimum number of character ROIs n_c = 3 (postprocessing step 4).
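For reference, these reported values can be collected into a single configuration block; the sketch below is merely a convenient grouping, and the parameter names are hypothetical.

```python
# Extraction parameters reported in Section 4.3 (names are illustrative).
EXTRACTION_CONFIG = {
    "c": 0.15,                   # projection-extreme threshold, Eq. (3)
    "n_p": 3,                    # highest local maxima per projection direction
    "side_range_px": (20, 150),  # expected plate width/height range (area A)
    "n_E": 3,                    # Euler-number threshold (postprocessing step 1)
    "aspect_ratio": 3.27,        # Italian plates: 360 mm x 110 mm
    "delta": 1.0,                # aspect-ratio tolerance (postprocessing step 2)
    "n_c": 3,                    # minimum number of character ROIs (step 4)
}
```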
It should be pointed out that, although a few incomplete or wrong ROIs were extracted, the extraction step succeeded in extracting a more than sufficient number of correct license plate ROIs for the subsequent recognition module.

Table 2. Results of ROI extraction on the ACS Video Dataset

  Video     Extracted   Correct   Partial   Wrong
            ROIs        ROIs      ROIs      ROIs
  CS008PX   204         194       10        0
  DW072YY   92          24        66        2
  BL021TA   86          86        0         0
  BD691JJ   85          75        9         1
  DP756YZ   350         329       21        0
  CM640GG   339         311       28        0
  EH246ZK   336         336       0         0
  ER984ZN   125         124       0         1
  Avg.                  91.5%     8.3%      0.2%

In order to compare the extraction results with those of other existing approaches, we considered the software JavaANPR [24], a system for ALPR in still images. In Table 3, for each license plate we report its results on sixty selected sequence frames of each video of the ACS Video Dataset. Here, we can observe that JavaANPR accuracy, in terms of correct/wrong extracted ROIs, is quite low for this dataset, achieving on average 42.5% of correctly extracted ROIs (as compared to the average 91.5% of the proposed ACS reported in Table 2).

Table 3. Extraction results of the software JavaANPR [24] on sixty selected sequence frames of each video of the ACS Video Dataset

  Video     Correct   Partial/Wrong
            ROIs      ROIs
  CS008PX   10        50
  DW072YY   3         57
  BL021TA   36        24
  BD691JJ   21        39
  DP756YZ   17        43
  CM640GG   35        25
  EH246ZK   32        28
  ER984ZN   0         60
  Avg.      42.5%     57.5%

4.4 Recognition Results

Examples of recognition results for testing license plates in the ACS Recognition Dataset are reported in Fig. 8, where we can observe that matched keypoints (green circles) match perfectly (green lines connecting them).

Fig. 8. Recognition step: license plates extracted from the ACS Video Dataset (top of each figure) and correctly matched with license plates of the ACS Recognition Dataset (bottom of each figure). Green circles indicate keypoints common to the matched images and green lines connect matched keypoints.

Table 4 provides the results of the proposed recognition step, also comparing them with those obtained by an analogous recognition module based on SIFT, rather than ASIFT, features. For all the experiments, values for the recognition parameters ρ_1, ρ_2, and ρ_3 (see Section 3.3) have been experimentally fixed as 0.8, 0.2, and 5, respectively. We can observe that the recognition module perfectly recognizes all the correctly extracted license plates, notwithstanding the very similar license plates included in the ACS Recognition Dataset (Fig. 5). Such good results are strictly linked to the choice of the ASIFT feature descriptors, as verified by comparison with the well-known SIFT descriptors.

Table 4. Results of license plate recognition using SIFT and ASIFT features

                      SIFT                          ASIFT
  Plate     Total   Correct   Wrong     Non       Correct   Wrong     Non
            ROIs    recogn.   recogn.   recogn.   recogn.   recogn.   recogn.
  CS008PX   194     85        10        99        194       0         0
  DW072YY   24      24        0         0         24        0         0
  BL021TA   86      86        0         0         86        0         0
  BD691JJ   75      74        1         0         75        0         0
  DP756YZ   329     321       0         8         329       0         0
  CM640GG   311     309       0         2         311       0         0
  EH246ZK   336     336       0         0         336       0         0
  ER984ZN   124     124       0         0         124       0         0
  Avg.              91.9%     0.7%      7.4%      100%      0%        0%

4.5 Further Comparisons

In order to further compare the accuracy of the proposed ACS with that of other existing systems, Table 5 reports the performance of recently proposed ALPR systems, each achieved on a different dataset. Here, the Extraction Rate refers to the percentage of correctly extracted license plate ROIs, and the Recognition Rate refers to the percentage of correct plate recognitions (resulting from the product of character segmentation and character recognition rates for methods performing these two sub-steps), as reported by the respective authors. The System Performance indicates the percentage of license plates correctly recognized by the system, obtained as:

    SystemPerformance = ExtractionRate \times RecognitionRate.
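As a quick worked check of this formula, consider the last row of Table 5 below: the proposed system correctly extracts 91.5% of the license plate ROIs and recognizes 100% of the correctly extracted ones, hence

    SystemPerformance = 0.915 \times 1.000 = 0.915 = 91.5\%,

which matches the reported value.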
Table 5 helps us conclude that the proposed ACS achieves the highest Recognition Rate but almost the lowest Extraction Rate, even though its System Performance is comparable with that of recently proposed approaches. Further work will be devoted to enhancing the ROI extraction module.

Table 5. Performance comparison of different systems

  Method         Extraction   Recognition   System        Plate
                 Rate         Rate          Performance   Format
  [5] (2008)     91.70%       79.25%        72.67%        Turkish
  [6] (2009)     97.30%       95.70%        93.10%        Chinese
  [16] (2009)    98.40%       97.30%        95.70%        Motorcycle
  [17] (2009)    95.90%       92.30%        88.52%        Multinational
  [13] (2010)    88.10%       98.25%        86.56%        Greek
  [18] (2010)    97.30%       86.48%        84.14%        Iranian
  [28] (2010)    96.80%       90.00%        87.50%        Taiwanese
  [27] (2011)    98.30%       95.20%        93.50%        Multinational
  [9] (2011)     91.00%       95.50%        86.90%        Iranian
  [3] (2014)     96.80%       97.52%        94.40%        Iranian
  Proposed       91.50%       100.00%       91.50%        Italian

5 Conclusions

In this paper we propose an access control system based on automatic license plate recognition, consisting of three main modules for acquisition, extraction, and recognition. We show how the online learning of a neural background model, coupled with a stopped foreground subtraction mechanism, can be exploited for acquisition, in order to activate the subsequent modules and provide a subset of relevant video frames where to look for license plates. To extract the license plate ROI, we rely on Radon projections of the image edges, also exploiting a priori information on license plates. The recognition module, instead of segmenting characters and then recognizing each of them, relies on matching the entire license plate ROI with those stored in a database of authorized license plates, based on suitable features and validation tests. Experimental results show that, although the extraction module could be improved, the 100% success rate of the recognition module, which does not require online training, makes the proposed system attain overall performance comparable with that of state-of-the-art ALPR methods.

Acknowledgements. This research was supported by Project PON01 01430 PT2LOG under the Research and Competitiveness PON, funded by the European Union (EU) via structural funds, with the responsibility of the Italian Ministry of Education, University, and Research (MIUR).

References

1. Anagnostopoulos, C.N.: License plate recognition: A brief tutorial. IEEE Intelligent Transportation Systems Magazine 6(1), 59–67 (2014)
2. Anagnostopoulos, C.N., Anagnostopoulos, I., Psoroulas, I., Loumos, V., Kayafas, E.: License plate recognition from still images and video sequences: A survey. IEEE Transactions on Intelligent Transportation Systems 9(3), 377–391 (2008)
3. Ashtari, A., Nordin, M., Fathy, M.: An Iranian license plate recognition system based on color features. IEEE Transactions on Intelligent Transportation Systems (2014, to appear)
4. Bailey, D., Irecki, D., Lim, B.K., Yang, L.: Test bed for number plate recognition applications. In: Proceedings of the First IEEE International Workshop on Electronic Design, Test and Applications, pp. 501–503 (2002)
5. Caner, H., Gecim, H., Alkar, A.: Efficient embedded neural-network-based license plate recognition system. IEEE Transactions on Vehicular Technology 57(5), 2675–2683 (2008)
6. Chen, Z.X., Liu, C.Y., Chang, F.L., Wang, G.Y.: Automatic license-plate location and recognition based on feature salience. IEEE Transactions on Vehicular Technology 58(7), 3781–3785 (2009)
7. Comelli, P., Ferragina, P., Granieri, M., Stabile, F.: Optical recognition of motor vehicle license plates. IEEE Transactions on Vehicular Technology 44(4), 790–799 (1995)
8. Dasgupta, S., Sinha, K.: Randomized partition trees for exact nearest neighbor search. CoRR abs/1302.1948 (2013)
9. Dashtban, M.H., Dashtban, Z., Bevrani, H.: A novel approach for vehicle license plate localization and recognition. Int. J. Comput. Appl. 26(11), 22–30 (2011)
10. Dlagnekov, L., Belongie, S.: Recognizing cars. Tech. Rep. CS2005-0833, CSE, UCSD (2005)
11. Du, S., Ibrahim, M., Shehata, M., Badawy, W.: Automatic license plate recognition (ALPR): A state-of-the-art review. IEEE Transactions on Circuits and Systems for Video Technology 23(2), 311–325 (2013)
12. Fischler, M.A., Bolles, R.C.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
13. Giannoukos, I., Anagnostopoulos, C.N., Loumos, V., Kayafas, E.: Operator context scanning to support high segmentation rates for real time license plate recognition. Pattern Recognition 43(11), 3866–3878 (2010)
14. Hsu, G.S., Chen, J.C., Chung, Y.Z.: Application-oriented license plate recognition. IEEE Transactions on Vehicular Technology 62(2), 552–561 (2013)
15. Hu, M.K.: Visual pattern recognition by moment invariants. IRE Transactions on Information Theory 8(2), 179–187 (1962)
16. Huang, Y.P., Chen, C.H., Chang, Y.T., Sandnes, F.E.: An intelligent strategy for checking the annual inspection status of motorcycles based on license plate recognition. Expert Systems with Applications 36(5), 9260–9267 (2009)
17. Jiao, J., Ye, Q., Huang, Q.: A configurable method for multi-style license plate recognition. Pattern Recognition 42(3), 358–369 (2009)
18. Kasaei, S.H., Kasaei, S.M., Kasaei, S.A.: New morphology-based method for robust Iranian car plate detection and recognition. Int. J. Comput. Theory Eng. 2(2), 264–268 (2010)
19. Kim, K.K., Kim, K., Kim, J., Kim, H.: Learning-based approach for license plate recognition. In: Proceedings of the 2000 IEEE Signal Processing Society Workshop on Neural Networks for Signal Processing X, vol. 2, pp. 614–623 (2000)
20. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004)
21. Lowe, D.: Object recognition from local scale-invariant features. In: Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 1150–1157 (1999)
22. Maddalena, L., Petrosino, A.: Stopped object detection by learning foreground model in videos. IEEE Trans. Neural Netw. Learn. Syst. 24(5), 723–735 (2013)
23. Maddalena, L., Petrosino, A.: The 3dSOBS+ algorithm for moving object detection. Computer Vision and Image Understanding 122, 65–73 (2014)
24. Martinsky, O.: Algorithmic and mathematical principles of automatic number plate recognition systems (2006), http://javaanpr.sourceforge.net/
25. Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(10), 1615–1630 (2005)
26. Muja, M., Lowe, D.G.: Fast approximate nearest neighbors with automatic algorithm configuration. In: VISAPP International Conference on Computer Vision Theory and Applications, pp. 331–340 (2009)
27. Thome, N., Vacavant, A., Robinault, L., Miguet, S.: A cognitive and video-based approach for multinational license plate recognition. Machine Vision and Applications 22(2), 389–407 (2011)
28. Wang, M.L., Liu, Y.H., Liao, B.Y., Lin, Y.S., Horng, M.F.: A vehicle license plate recognition system based on spatial/frequency domain filtering and neural networks. In: Pan, J.-S., Chen, S.-M., Nguyen, N.T. (eds.) ICCCI 2010, Part III. LNCS, vol. 6423, pp. 63–70. Springer, Heidelberg (2010)
29. Yu, G., Morel, J.M.: A fully affine invariant image comparison method. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), pp. 1597–1600 (2009)
