dc.contributor.author: Yaprak, Büşranur
dc.contributor.author: Gedikli, Eyüp
dc.date.accessioned: 2024-05-06T12:13:37Z
dc.date.available: 2024-05-06T12:13:37Z
dc.date.issued: 2024
dc.identifier.citation: Scopus EXPORT DATE: 02 May 2024 @ARTICLE{Yaprak2024, url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85187673967&doi=10.1007%2fs11042-024-18859-9&partnerID=40&md5=bdb0cb5f246c714354818ce14deec05c}, affiliations = {Department of Software Engineering, Gümüşhane University, Gümüşhane, 29100, Turkey; Department of Software Engineering, Karadeniz Technical University, Trabzon, 61080, Turkey}, correspondence_address = {B. Yaprak; Department of Software Engineering, Gümüşhane University, Gümüşhane, 29100, Turkey; email: busra.kucukugurlu@gumushane.edu.tr}, publisher = {Springer}, issn = {13807501}, coden = {MTAPF}, language = {English}, abbrev_source_title = {Multimedia Tools Appl} }
dc.identifier.uri: https://link.springer.com/article/10.1007/s11042-024-18859-9
dc.identifier.uri: https://hdl.handle.net/20.500.12440/6207
dc.description.abstract: Gait recognition is the process of identifying a person from a distance based on their walking patterns. However, the recognition rate drops significantly under cross-view angle and appearance-based variations. In this study, the effectiveness of the most well-known gait representations in solving this problem is investigated based on deep learning. For this purpose, a comprehensive performance evaluation is performed by combining different modalities, including silhouettes, optical flows, and the concatenated image of the Gait Energy Image (GEI) head and leg regions, with GEI itself. This evaluation is carried out across different multimodal deep convolutional neural network (CNN) architectures, namely fine-tuned EfficientNet-B0, MobileNet-V1, and ConvNeXt-base models. These models are trained separately on GEIs, silhouettes, optical flows, and the concatenated image of GEI head and leg regions, and the extracted GEI features are then fused in pairs with the other extracted modality features to find the most effective gait combination. Experimental results on two different datasets, CASIA-B and Outdoor-Gait, show that the concatenated image of GEI head and leg regions significantly increased the recognition rate of the networks compared to other modalities. Moreover, this modality demonstrates greater robustness under varied carrying (BG) and clothing (CL) conditions compared to optical flows (OF) and silhouettes (SF). Code is available at https://github.com/busrakckugurlu/Different-gait-combinations-based-on-multi-modal-deep-CNN-architectures.git © The Author(s) 2024.
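The abstract describes extracting features per modality and fusing GEI features in pairs with another modality's features before identity matching. A minimal sketch of such pairwise fusion, assuming concatenation of L2-normalized embeddings and cosine-similarity ranking; the fine-tuned CNN backbones from the paper are replaced by random vectors purely for illustration, and all dimensions and names here are assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x):
    # Scale a feature vector (or batch of vectors) to unit length.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def fuse(gei_feat, other_feat):
    # Pairwise fusion: concatenate the two modality embeddings
    # after normalizing each, so neither modality dominates.
    return np.concatenate(
        [l2_normalize(gei_feat), l2_normalize(other_feat)], axis=-1
    )

# Hypothetical gallery: 5 subjects, 128-dim features per modality
# (stand-ins for CNN-extracted GEI and optical-flow features).
gallery_gei = rng.normal(size=(5, 128))
gallery_of = rng.normal(size=(5, 128))
gallery = l2_normalize(fuse(gallery_gei, gallery_of))

# Probe: subject 2 observed again, with small feature noise.
probe = l2_normalize(
    fuse(gallery_gei[2] + 0.05 * rng.normal(size=128),
         gallery_of[2] + 0.05 * rng.normal(size=128))
)

# Rank gallery identities by cosine similarity to the probe.
scores = gallery @ probe
predicted = int(np.argmax(scores))
print(predicted)  # → 2
```

The same fusion step can be repeated for each GEI-plus-modality pair (silhouettes, optical flows, head/leg concatenation) to compare combinations, as the evaluation in the abstract does.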
dc.language.iso: eng
dc.publisher: Springer (CODEN: MTAPF)
dc.relation.ispartof: Multimedia Tools and Applications
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Gait combination
dc.subject: Gait recognition
dc.subject: GEI
dc.subject: Multi-modal deep CNN
dc.subject: Silhouette
dc.title: Different gait combinations based on multi-modal deep CNN architectures
dc.type: article
dc.relation.publicationcategory: Article - National Peer-Reviewed Journal - Institutional Faculty Member
dc.department: Faculties, Faculty of Engineering and Natural Sciences, Department of Software Engineering
dc.authorid: 0000-0002-6034-6850
dc.contributor.institutionauthor: Yaprak, Büşranur
dc.identifier.doi: 10.1007/s11042-024-18859-9
dc.authorscopusid: 58938260300


Files in this item:

There are no files associated with this item.

This item appears in the following collection(s).
