These are my notes on training YOLOX on a custom dataset in COCO format. The YOLOX GitHub repository: https://github.com/Megvii-BaseDetection/YOLOX

Environment

conda create -n yolox python=3.8
conda activate yolox
pip install torch==1.8
cd yolox
pip install -r requirements.txt
python setup.py develop
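A quick optional sanity check before moving on (a minimal sketch; check_env.py is not part of the YOLOX repo): confirm that PyTorch imports and can see your GPU.

# check_env.py -- optional sanity check, not part of the YOLOX repo
import torch
print(torch.__version__)           # expect a 1.8.x build
print(torch.cuda.is_available())   # should print True if you plan to train on GPU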

Install pycocotools

git clone https://github.com/cocodataset/cocoapi
cd cocoapi/PythonAPI/
# cd pycocotools-2.0.2
python setup.py build_ext install
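To confirm the build succeeded, you can load any COCO-format annotation file with the freshly installed API (a minimal sketch; the path below is only an example, point it at one of your own instances_*.json files).

# verify_pycocotools.py -- optional check; the annotation path is an example
from pycocotools.coco import COCO

coco = COCO("datasets/coco/annotations/instances_val2017.json")
print(len(coco.getImgIds()), "images,", len(coco.getCatIds()), "categories")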

Pretrained Model

Download the latest pre-trained weights (e.g. yolox_s.pth) and place them in the root of the yolox project.

Test Demo

python tools/demo.py image -f exps/default/yolox_s.py -c yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu

Data preparation

coco
├── annotations
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── train2017
│   └── images
└── val2017
    └── images

cd yolox
ln -s /path/to/coco datasets/coco  # symlink the prepared COCO-format dataset into datasets/ (use an absolute path to your dataset)
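If you are building the annotation files yourself, each instances_*.json must follow the standard COCO layout with "images", "annotations" and "categories" lists. The sketch below writes a minimal, hypothetical example; the image and category entries are placeholders for your own data.

# make_minimal_coco_json.py -- illustrative skeleton of a COCO-format annotation file
import json

ann = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height]; "area" and "iscrowd" are required by pycocotools
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 80], "area": 4000, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "cat", "supercategory": "animal"},
    ],
}

with open("instances_train2017.json", "w") as f:
    json.dump(ann, f)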
  1. Modify exps/example/custom/yolox_s.py as follows:
# Define your dataset path
self.data_dir = "datasets/coco"
self.train_ann = "instances_train2017.json"
self.val_ann = "instances_val2017.json"

self.num_classes = 3
  2. Then modify the category names in yolox/data/datasets/coco_classes.py (see the sketch after this list).
  3. Modify YOLOX/yolox/exp/yolox_base.py:
class Exp(BaseExp):
    def __init__(self):
        super().__init__()

        # ---------------- model config ---------------- #
        self.num_classes = 1
        self.depth = 1.00
        self.width = 1.00

        # ---------------- dataloader config ---------------- #
        # set worker to 4 for shorter dataloader init time
        self.data_num_workers = 4
        self.input_size = (480, 480)  # (height, width)
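For step 2, a minimal sketch of yolox/data/datasets/coco_classes.py with custom categories. The class names here are placeholders; keep them in the same order as the "categories" field of your annotation JSON, and make the count match num_classes.

# yolox/data/datasets/coco_classes.py -- replace the 80 default names with your own
COCO_CLASSES = (
    "cat",    # placeholder class names -- substitute your own
    "dog",
    "bird",
)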

Training

python tools/train.py -f exps/example/custom/yolox_s.py -d 1 -b 8 --fp16 -c yolox_s.pth
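Checkpoints are written to ./YOLOX_outputs/yolox_s/. If you want to inspect what was saved, a minimal sketch (key names may vary slightly across YOLOX versions):

# inspect_ckpt.py -- optional, peek at the trained checkpoint
import torch

ckpt = torch.load("./YOLOX_outputs/yolox_s/best_ckpt.pth", map_location="cpu")
print(ckpt.keys())  # typically includes "model" (the state_dict used by demo.py and export)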

Testing

python tools/demo.py image -f exps/example/custom/yolox_s.py -c ./YOLOX_outputs/yolox_s/best_ckpt.pth --path path-to-your-image --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu

Export ONNX

python tools/export_onnx.py --output-name yolox_s.onnx -f exps/example/custom/yolox_s.py -c ./YOLOX_outputs/yolox_s/best_ckpt.pth
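Optionally, verify the exported graph with the onnx package before running inference (a minimal sketch; assumes onnx is installed in the environment):

# check_onnx.py -- optional structural check of the exported model
import onnx

model = onnx.load("yolox_s.onnx")
onnx.checker.check_model(model)
print([i.name for i in model.graph.input], [o.name for o in model.graph.output])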

ONNXRuntime Demo

python3 demo/ONNXRuntime/onnx_inference.py -m <ONNX_MODEL_PATH> -i <IMAGE_PATH> -o <OUTPUT_DIR> -s 0.3 --input_shape 640,640
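For reference, a stripped-down sketch of what the ONNXRuntime demo does, assuming yolox_s.onnx and a 640x640 input. The real demo/ONNXRuntime/onnx_inference.py additionally performs letterbox preprocessing, grid decoding and NMS, so use it for actual results.

# ort_minimal.py -- minimal raw inference, no real pre/post-processing
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolox_s.onnx", providers=["CPUExecutionProvider"])

img = cv2.imread("assets/dog.jpg")                       # HxWx3, BGR
blob = cv2.resize(img, (640, 640)).astype(np.float32)    # naive resize (the demo uses letterbox padding)
blob = blob.transpose(2, 0, 1)[None, ...]                # 1x3x640x640, matches --input_shape 640,640

outputs = session.run(None, {session.get_inputs()[0].name: blob})
print(outputs[0].shape)  # raw predictions; decode and apply NMS as in onnx_inference.py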

Thanks

Comments are welcome; I check and reply daily ☼