
		<paper>
			<loc>https://jjcit.org/paper/143</loc>
			<title>UNCONSTRAINED EAR RECOGNITION USING TRANSFORMERS</title>
			<doi>10.5455/jjcit.71-1627981530</doi>
			<authors>Marwin B. Alejo</authors>
			<keywords>Deep learning,Neural networks,Transformers,Vision transformer,Data-efficient image transformers,Ear recognition</keywords>
			<citation>16</citation>
			<views>5482</views>
			<downloads>1222</downloads>
			<received_date>3-Aug.-2021</received_date>
			<revised_date>5-Sep.-2021</revised_date>
			<accepted_date>12-Sep.-2021</accepted_date>
			<abstract>The advantages of the ear as a biometric modality over other modalities have provided an avenue for researchers to conduct biometric recognition studies using state-of-the-art computing methods. This paper presents a deep learning pipeline for unconstrained ear recognition using transformer neural networks: the Vision Transformer (ViT) and Data-efficient image Transformers (DeiTs). The ViT-Ear and DeiT-Ear models of this study achieved recognition accuracy comparable to or better than the results of state-of-the-art CNN-based methods and other deep learning algorithms. This study also determined that Vision Transformer and Data-efficient image Transformer models outperform ResNets without requiring exhaustive data augmentation. Moreover, this study observed that the performance of ViT-Ear closely matches that of other ViT-based biometric studies.</abstract>
		</paper>
