
YOLOv5 helmet wearing detection method based on Swin Transformer
DOI:
CSTR:
Authors: Zheng Chuwei, Lin Hui
Affiliation: School of Intelligent Engineering, Shaoguan University

Biography:

Corresponding author:

CLC number: TP391

Fund project: Special Fund for Science and Technology Innovation Cultivation of Guangdong University Students (pdjh2022b0470)



Abstract:

Aiming at the difficulty of detecting occluded targets and the high false-detection and missed-detection rates of current helmet detection methods on construction sites, an improved YOLOv5 helmet detection method is proposed. First, the K-means++ clustering algorithm is used to redesign prior anchor box sizes that match the helmet dataset. Second, Swin Transformer is adopted as the YOLOv5 backbone for feature extraction; its shifted-window multi-head self-attention models dependencies between features at different spatial positions, effectively captures global context information, and provides stronger feature extraction capability. Third, a C3-Ghost module is proposed, which improves the YOLOv5 C3 module with Ghost Bottlenecks so that more valuable redundant feature maps are generated through low-cost operations, effectively reducing model parameters and computational complexity. Finally, drawing on the structural advantages of cross-scale feature fusion in the bidirectional feature pyramid network, a new cross-scale feature fusion module is proposed that better adapts to detection targets of different scales. Experimental results show that, compared with the original YOLOv5, the improved YOLOv5 raises mAP@.5:.95 on the helmet detection task by 2.3 percentage points, meeting the accuracy requirements for helmet wearing detection in complex construction scenarios.
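As an illustration of the anchor redesign step, the following is a minimal sketch (not the authors' code). It assumes the ground-truth box widths and heights have already been extracted from the helmet dataset labels into an array wh; it clusters them with K-means++ seeding and sorts the resulting anchor boxes by area for assignment to the detection heads.

# Minimal sketch of prior-anchor redesign with K-means++ seeding (illustrative only).
# Assumes `wh` is an (N, 2) array of ground-truth box widths/heights, in pixels,
# already extracted from the helmet dataset annotations.
import numpy as np
from sklearn.cluster import KMeans

def kmeanspp_anchors(wh, n_anchors=9):
    """Cluster (w, h) pairs into n_anchors prior boxes using K-means++ initialization."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0).fit(wh)
    anchors = km.cluster_centers_
    # Sort by anchor area so small/medium/large anchors map to the P3/P4/P5 heads.
    return anchors[np.argsort(anchors.prod(axis=1))]

# Stand-in data for demonstration; replace with real label statistics.
wh = np.abs(np.random.randn(1000, 2)) * 60 + 20
print(kmeanspp_anchors(wh).round(1))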

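The low-cost feature-map generation behind the C3-Ghost module can be sketched in the same spirit. The PyTorch module below is a hedged illustration of a Ghost convolution, the building block of the Ghost Bottleneck: a small primary convolution produces intrinsic feature maps, and a cheap depthwise convolution derives additional "ghost" maps from them. Channel counts and kernel sizes are illustrative, not the paper's exact configuration.

# Illustrative Ghost convolution: primary conv + cheap depthwise conv, concatenated.
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, c_in, c_out, k=1, ratio=2, dw_k=5):
        super().__init__()
        c_primary = c_out // ratio                   # intrinsic feature maps
        c_cheap = c_out - c_primary                  # "ghost" feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(c_primary), nn.SiLU())
        self.cheap = nn.Sequential(                  # depthwise, low-cost operation
            nn.Conv2d(c_primary, c_cheap, dw_k, padding=dw_k // 2,
                      groups=c_primary, bias=False),
            nn.BatchNorm2d(c_cheap), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

x = torch.randn(1, 64, 80, 80)
print(GhostConv(64, 128)(x).shape)   # torch.Size([1, 128, 80, 80])
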
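Finally, the cross-scale fusion idea borrowed from the bidirectional feature pyramid network can be shown with a minimal weighted-fusion sketch. It assumes the input feature maps have already been resized to a common resolution and channel width; the learnable per-input weights with fast normalization follow the BiFPN formulation, while the class name and tensor shapes below are placeholders.

# Illustrative BiFPN-style fast normalized weighted fusion of same-shape feature maps.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))  # one learnable weight per input
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.w)                       # keep weights non-negative
        w = w / (w.sum() + self.eps)                 # fast normalized fusion
        return sum(wi * fi for wi, fi in zip(w, feats))

fuse = WeightedFusion(2)
out = fuse([torch.randn(1, 256, 40, 40), torch.randn(1, 256, 40, 40)])
print(out.shape)  # torch.Size([1, 256, 40, 40])
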
Cite this article:

Zheng Chuwei, Lin Hui. YOLOv5 helmet wearing detection method based on Swin Transformer [J]. Computer Measurement & Control, 2023, 31(3): 15-21.

History
  • Received: 2022-07-09
  • Revised: 2022-08-15
  • Accepted: 2022-08-16
  • Published online: 2023-03-15
  • Publication date: