
YOLOv5 Hyperparameter Settings and Data Augmentation Explained (YOLOv5 Hyperparameter Evolution)

1. Introduction to the YOLOv5 hyperparameter configuration files



YOLOv5 has about 30 hyperparameters used for various training settings. They are defined in YAML files under the /data directory. Better initial guesses produce better final results, so it is important to initialize these values properly before evolving them. If in doubt, simply use the default values, which are optimized for YOLOv5 COCO training from scratch.

The YOLOv5 hyperparameter files are data/hyp.finetune.yaml (suited to the VOC dataset) and hyp.scratch.yaml (suited to the COCO dataset); newer versions split the latter into the low/medium/high-augmentation files listed below.
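These files are plain YAML, so a quick way to experiment is to load one, override a few values, and save a custom copy that you pass to train.py with --hyp. A minimal sketch (file paths follow the repository layout described below; adjust them to your checkout):

```python
# Minimal sketch: load a YOLOv5 hyperparameter file, override a value, save a custom copy.
import yaml

with open('data/hyps/hyp.scratch-low.yaml') as f:      # path as in recent YOLOv5 checkouts
    hyp = yaml.safe_load(f)                            # dict of ~30 hyperparameters

hyp['mixup'] = 0.1       # e.g. enable a little MixUp
hyp['fliplr'] = 0.0      # e.g. disable horizontal flips

with open('data/hyps/hyp.custom.yaml', 'w') as f:
    yaml.safe_dump(hyp, f, sort_keys=False)

# then train with it:  python train.py --hyp data/hyps/hyp.custom.yaml ...
```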

1. yolov5/data/hyps/hyp.scratch-low.yaml (COCO training from scratch, low augmentation)

```yaml
# Hyperparameters for low-augmentation COCO training from scratch
# python train.py --batch 64 --cfg yolov5n6.yaml --weights '' --data coco.yaml --img 640 --epochs 300 --linear
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
# color augmentation: hue, saturation, value (brightness)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain), scaling factor of the affine transform
shear: 0.0  # image shear (+/- deg), shear coefficient of the affine transform
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001; 0.0 = affine only, >0 adds perspective
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability), only applied when mosaic is enabled
copy_paste: 0.0  # segment copy-paste (probability), only applied when mosaic is enabled
```

2. yolov5/data/hyps/hyp.scratch-med.yaml (medium augmentation)

```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for medium-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.3  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 0.7  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.9  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)
```

3. hyp.scratch-high.yaml (high augmentation)

```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# Hyperparameters for high-augmentation COCO training from scratch
# python train.py --batch 32 --cfg yolov5m6.yaml --weights '' --data coco.yaml --img 1280 --epochs 300
# See tutorials for hyperparameter evolution https://github.com/ultralytics/yolov5#tutorials

lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.1  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.3  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 0.7  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.9  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
copy_paste: 0.1  # segment copy-paste (probability)
```

2. The OneCycleLR learning rate

The learning rate of each parameter group is set according to the 1cycle policy: the learning rate is annealed from an initial value up to a maximum learning rate, and then back down to a minimum learning rate far below the initial one (see the original 1cycle paper for details).
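For intuition, here is a minimal PyTorch sketch (not YOLOv5's own code) showing that shape with the built-in OneCycleLR scheduler. YOLOv5 itself implements the decay part with a cosine lambda built from lr0 and lrf (the one_cycle helper in utils/general.py), while the ramp-up is handled by the warmup settings.

```python
# Minimal sketch of the 1cycle shape using PyTorch's built-in scheduler (not YOLOv5's own code).
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.937)

total_steps = 100
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.1, total_steps=total_steps)

lrs = []
for step in range(total_steps):
    optimizer.step()                      # normally preceded by loss.backward()
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])

# lr rises towards max_lr, then anneals to a value far below the initial lr
print(f'start {lrs[0]:.4f}, peak {max(lrs):.4f}, end {lrs[-1]:.6f}')
```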

3. Warmup

Warmup is a learning-rate scheduling technique that first appeared in the ResNet paper: a small learning rate is used at the beginning of training, and after a warmup period (for example 10 epochs or 10,000 steps) training switches to the preset learning rate.

Why use warmup?

At the start of training the weights are randomly initialized and the model knows nothing about the data. During the first epoch it adapts rapidly to whatever it sees, and a large learning rate at this stage can easily push the model in a bad direction that takes many extra epochs to recover from.

Once the model has trained for a while it has some prior knowledge of the data, so a larger learning rate is less likely to derail it and can be used to speed up training.

After training with a large learning rate for some time, the model's distribution becomes relatively stable and there is little new to learn from the data; continuing with a large learning rate would disturb that stability, so a smaller learning rate is better for converging to the optimum.
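The idea itself takes only a few lines of code. Below is an illustrative linear-warmup sketch (not the actual YOLOv5 warmup code, which also interpolates warmup_bias_lr and warmup_momentum per parameter group over the first warmup_epochs):

```python
# Illustrative linear warmup: ramp the lr from a small value up to the target lr
# over the first warmup_iters iterations, then leave it to the main schedule.
import numpy as np
import torch

model = torch.nn.Linear(10, 1)
target_lr = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=target_lr, momentum=0.937)

warmup_iters = 1000
for it in range(3000):                                   # training iterations
    if it <= warmup_iters:
        # interpolate the lr between target_lr/100 and target_lr
        lr = float(np.interp(it, [0, warmup_iters], [target_lr / 100, target_lr]))
        for g in optimizer.param_groups:
            g['lr'] = lr
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    optimizer.zero_grad()
```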


PyTorch has no built-in warmup interface, so the third-party package pytorch_warmup can be used; install it with pip install pytorch_warmup.

1. When the learning-rate schedule is driven by the global iteration count, untuned linear warmup can be used like this:

```python
import torch
import pytorch_warmup as warmup

optimizer = torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), weight_decay=0.01)
num_steps = len(dataloader) * num_epochs
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_steps)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)

for epoch in range(1, num_epochs + 1):
    for batch in dataloader:
        optimizer.zero_grad()
        loss = ...
        loss.backward()
        optimizer.step()
        with warmup_scheduler.dampening():
            lr_scheduler.step()
```

2. If you want to use learning-rate scheduler "chaining", supported in PyTorch 1.4.0 and later, simply step all chained schedulers inside the with statement:

```python
lr_scheduler1 = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
lr_scheduler2 = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)

for epoch in range(1, num_epochs + 1):
    for batch in dataloader:
        ...
        optimizer.step()
        with warmup_scheduler.dampening():
            lr_scheduler1.step()
            lr_scheduler2.step()
```

3. When the learning-rate schedule is driven by the epoch number, the warmup schedule can be used like this:

```python
lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[num_epochs // 3], gamma=0.1)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)

for epoch in range(1, num_epochs + 1):
    for i, batch in enumerate(dataloader):
        optimizer.zero_grad()
        loss = ...
        loss.backward()
        optimizer.step()
        if i < len(dataloader) - 1:
            with warmup_scheduler.dampening():
                pass
    with warmup_scheduler.dampening():
        lr_scheduler.step()
```

4. Warmup Schedules

1. Manual Warmup

For linear and exponential warmup, the warmup factor w(t) depends on a warmup period that must be specified manually.

1. Linear

The warmup factor is w(t) = min(1, t / warmup_period):

```python
warmup_scheduler = warmup.LinearWarmup(optimizer, warmup_period=2000)
```

2. Exponential (untuned)

The warmup period is derived from Adam's beta2 parameter as warmup_period = 1 / (1 - beta2):

```python
warmup_scheduler = warmup.UntunedExponentialWarmup(optimizer)
```

3. RAdam Warmup

The warmup factor depends on Adam’s beta2 parameter for RAdamWarmup. Please see the original paper for the details.

```python
warmup_scheduler = warmup.RAdamWarmup(optimizer)
```

4. Apex's Adam

The Apex library provides an Adam optimizer tuned for CUDA devices, FusedAdam. The FusedAdam optimizer can be used with the warmup schedulers. For example:

```python
optimizer = apex.optimizers.FusedAdam(params, lr=0.001, betas=(0.9, 0.999), weight_decay=0.01)
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_steps)
warmup_scheduler = warmup.UntunedLinearWarmup(optimizer)
```

4. YOLOv5 data augmentation (yolov5-v6\utils\datasets.py)

See also: "Object detection with YOLOv5 - data augmentation" and "YOLOv5 (v6.1) data augmentation explained". Once training starts, you can view the effect of the augmentation policy in the train_batch*.jpg images. These images are saved in your training log directory, usually yolov5/runs/train/exp; train_batch0.jpg shows the mosaics and labels of training batch 0. A toy sketch of the mosaic idea follows below.
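For intuition about what mosaic does, here is a simplified sketch (illustration only; the real mosaic in YOLOv5's datasets.py picks a random mosaic centre, applies random scaling and remaps the box labels) that tiles four images onto one canvas:

```python
# Toy 2x2 mosaic: tile four images onto one canvas (no label handling).
import numpy as np
import cv2  # assumes OpenCV is installed

def simple_mosaic(imgs, size=640):
    assert len(imgs) == 4, 'mosaic needs exactly 4 images'
    canvas = np.zeros((2 * size, 2 * size, 3), dtype=np.uint8)
    for i, im in enumerate(imgs):
        im = cv2.resize(im, (size, size))
        r, c = divmod(i, 2)
        canvas[r * size:(r + 1) * size, c * size:(c + 1) * size] = im
    return canvas

# usage with random dummy images:
imgs = [np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(4)]
print(simple_mosaic(imgs).shape)  # (1280, 1280, 3)
```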

5. Integrating Albumentations into YOLOv5 to add new augmentation methods

To use albumentations simply pip install -U albumentations and then update the augmentation pipeline as you see fit in the new Albumentations class in yolov5/utils/augmentations.py. Note these Albumentations operations run in addition to the YOLOv5 hyperparameter augmentations, i.e. defined in hyp.scratch.yaml.

Here’s an example that applies Blur, MedianBlur and ToGray albumentations in addition to the YOLOv5 hyperparameter augmentations normally applied to your training mosaics 😃

```python
class Albumentations:
    # YOLOv5 Albumentations class (optional, used if package is installed)
    def __init__(self):
        self.transform = None
        try:
            import albumentations as A
            check_version(A.__version__, '1.0.3')  # version requirement

            self.transform = A.Compose([
                A.Blur(blur_limit=50, p=0.1),
                A.MedianBlur(blur_limit=51, p=0.1),
                A.ToGray(p=0.3)],
                bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))

            logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms))
        except ImportError:  # package not installed, skip
            pass
        except Exception as e:
            logging.info(colorstr('albumentations: ') + f'{e}')

    def __call__(self, im, labels, p=1.0):
        if self.transform and random.random() < p:
            new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0])  # transformed
            im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
        return im, labels
```

You can also integrate additional Albumentations augmentations directly into the YOLOv5 dataloader; the best place to insert them is here:

```python
if self.augment:
    # Augment imagespace
    if not mosaic:
        img, labels = random_perspective(img, labels,
                                         degrees=hyp['degrees'],
                                         translate=hyp['translate'],
                                         scale=hyp['scale'],
                                         shear=hyp['shear'],
                                         perspective=hyp['perspective'])

    # Augment colorspace
    augment_hsv(img, hgain=hyp['hsv_h'], sgain=hyp['hsv_s'], vgain=hyp['hsv_v'])

    # Apply cutouts
    # if random.random() < 0.9:
    #     labels = cutout(img, labels)
```

Here img is the image and labels are the bounding-box labels. Note that any Albumentations augmentations you add are applied in addition to the automatic YOLOv5 augmentations defined in the hyperparameter file.
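To experiment with a pipeline like the one above outside of YOLOv5, a small self-contained sketch with dummy data (the transforms and values here are arbitrary examples) looks like this:

```python
# Self-contained sketch of an Albumentations pipeline with YOLO-format boxes (dummy data).
import numpy as np
import albumentations as A

transform = A.Compose(
    [A.Blur(blur_limit=7, p=0.5),
     A.ToGray(p=0.3),
     A.HorizontalFlip(p=0.5)],
    bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))

im = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)   # dummy image
bboxes = [[0.5, 0.5, 0.2, 0.3]]                                 # normalized [xc, yc, w, h]
class_labels = [0]

out = transform(image=im, bboxes=bboxes, class_labels=class_labels)
print(out['image'].shape, out['bboxes'])
```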

6. Defining the evaluation metric (fitness)

Fitness is the value we seek to maximize. In YOLOv5 the default fitness function is defined as a weighted combination of metrics: mAP@0.5 contributes 10% of the weight and mAP@0.5:0.95 the remaining 90%, with Precision P and Recall R absent. You can adjust these weights as you see fit, or keep the default fitness definition (recommended).

yolov5/utils/metrics.py, lines 12 to 16 at commit 4103ce9:

```python
def fitness(x):
    # Model fitness as a weighted combination of metrics
    w = [0.0, 0.0, 0.1, 0.9]  # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
    return (x[:, :4] * w).sum(1)
```

7. Evolve (hyperparameter evolution)

```bash
# Single-GPU
python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache --evolve

# Multi-GPU
for i in 0 1 2 3 4 5 6 7; do
  sleep $(expr 30 \* $i) &&  # 30-second delay (optional)
  echo 'Starting GPU '$i'...' &&
  nohup python train.py --epochs 10 --data coco128.yaml --weights yolov5s.pt --cache --device $i --evolve > evolve_gpu_$i.log &
done

# Multi-GPU bash-while (not recommended)
for i in 0 1 2 3 4 5 6 7; do
  sleep $(expr 30 \* $i) &&  # 30-second delay (optional)
  echo 'Starting GPU '$i'...' &&
  "$(while true; do nohup python train.py... --device $i --evolve 1 > evolve_gpu_$i.log; done)" &
done
```

A typical result file looks like this:

```yaml
# YOLOv5 Hyperparameter Evolution Results
# Best generation: 287
# Last generation: 300
# metrics/precision, metrics/recall, metrics/mAP_0.5, metrics/mAP_0.5:0.95, val/box_loss, val/obj_loss, val/cls_loss
# 0.54634, 0.55625, 0.58201, 0.33665, 0.056451, 0.042892, 0.013441
lr0: 0.01  # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.2  # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937  # SGD momentum/Adam beta1
weight_decay: 0.0005  # optimizer weight decay 5e-4
warmup_epochs: 3.0  # warmup epochs (fractions ok)
warmup_momentum: 0.8  # warmup initial momentum
warmup_bias_lr: 0.1  # warmup initial bias lr
box: 0.05  # box loss gain
cls: 0.5  # cls loss gain
cls_pw: 1.0  # cls BCELoss positive_weight
obj: 1.0  # obj loss gain (scale with pixels)
obj_pw: 1.0  # obj BCELoss positive_weight
iou_t: 0.20  # IoU training threshold
anchor_t: 4.0  # anchor-multiple threshold
# anchors: 3  # anchors per output layer (0 to ignore)
fl_gamma: 0.0  # focal loss gamma (EfficientDet default gamma=1.5)
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 0.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.5  # image scale (+/- gain)
shear: 0.0  # image shear (+/- deg)
perspective: 0.0  # image perspective (+/- fraction), range 0-0.001
flipud: 0.0  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 1.0  # image mosaic (probability)
mixup: 0.0  # image mixup (probability)
copy_paste: 0.0  # segment copy-paste (probability)
```

We recommend a minimum of 300 generations of evolution for best results. Note that evolution is generally expensive and time consuming, since the base scenario is trained hundreds of times, possibly requiring hundreds or thousands of GPU hours.
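Conceptually, each generation takes the best previous results, randomly mutates the hyperparameters within per-parameter bounds, and retrains. The following simplified sketch (not the actual evolve code in train.py, which uses a meta dictionary of mutation gains and limits per hyperparameter) conveys the flavour of the mutation step:

```python
# Simplified flavour of the hyperparameter mutation step (not the actual YOLOv5 evolve code).
import random

# (mutation scale, lower limit, upper limit) per hyperparameter -- illustrative values only
meta = {'lr0':      (1.0, 1e-5, 0.1),
        'momentum': (0.3, 0.6, 0.98),
        'mixup':    (1.0, 0.0, 1.0)}

def mutate(hyp, mutation_prob=0.8, sigma=0.2):
    """Perturb each hyperparameter with probability mutation_prob, then clip to its bounds."""
    child = dict(hyp)
    for k, (scale, lo, hi) in meta.items():
        if random.random() < mutation_prob:
            factor = 1 + random.gauss(0, sigma) * scale   # multiplicative noise
            child[k] = min(max(child[k] * factor, lo), hi)
    return child

parent = {'lr0': 0.01, 'momentum': 0.937, 'mixup': 0.1}
print(mutate(parent))
```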

8. Hyperparameter visualization

evolve.csv is plotted as evolve.png by utils.plots.plot_evolve() after evolution finishes, with one subplot per hyperparameter showing fitness (y axis) vs hyperparameter values (x axis). Yellow indicates higher concentrations. Vertical distributions indicate that a parameter has been disabled and does not mutate. This is user-selectable in the meta dictionary in train.py, and is useful for fixing parameters and preventing them from evolving.
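If you prefer to inspect the results programmatically rather than from evolve.png, a small sketch like this works (the CSV path is an example; the column names match the evolve.csv header shown above, which YOLOv5 pads with spaces, hence the strip):

```python
# Sketch: recompute fitness (0.1 * mAP@0.5 + 0.9 * mAP@0.5:0.95) and print the best generation.
import pandas as pd

df = pd.read_csv('runs/evolve/exp/evolve.csv')        # example path -- adjust to your run directory
df.columns = [c.strip() for c in df.columns]          # evolve.csv column names may be padded with spaces

fitness = 0.1 * df['metrics/mAP_0.5'] + 0.9 * df['metrics/mAP_0.5:0.95']
best = fitness.idxmax()
print(f'best generation: {best}, fitness: {fitness[best]:.5f}')
print(df.loc[best])
```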
