zf py-faster-rcnn: how can I tell whether training succeeded?

I have recently been fine-tuning faster-rcnn on my own data, with the dataset laid out like VOC2007. VOC2007 includes the two files val.txt and train.txt, but I don't see py-faster-rcnn setting up any cross-validation during training. Does py-faster-rcnn use cross-validation when training? If so, which cross-validation method is it, and where can I change it?
No, cross-validation is not used.
Understanding the Faster RCNN Code (Python): The Training Process
I recently started learning deep learning and read through the Faster RCNN code. Many other people's blog posts helped me a great deal along the way, so I decided to write down my own rough understanding: partly as a record of my beginner's path for later reference, partly to give something back. My programming ability is limited and this is my first blog post, so there may be some mistakes.
Contents:
Step 1: Preparation
Step 2: Stage 1 RPN, init from ImageNet model
    - Adjust config parameters for the current task
    - Initialize caffe
    - Prepare the roidb and imdb
    - Set the output path output_dir (get_output_dir(imdb), in config, used to save intermediate caffemodels)
    - Start training
    - Save the final weights
Step 3: Stage 1 RPN, generate proposals
    - The rpn_generate function
    - Save the resulting proposal file
Step 4: Stage 1 Fast R-CNN using RPN proposals, init from ImageNet model
Step 5: Stage 2 RPN, init from stage 1 Fast R-CNN model
Step 6: Stage 2 RPN, generate proposals
Step 7: Stage 2 Fast R-CNN, init from stage 2 RPN R-CNN model
Step 8: Output the final model
AnchorTargetLayer and ProposalLayer
Code folder notes (tools, RPN, nms)
References
Step 1: Preparation
Start from train_faster_rcnn_alt_opt.py:
Parse the arguments: args = parse_args() uses Python's argparse. The main arguments are --net_name, --gpu, --cfg, etc. (the cfg file only overrides a few parameters; most of the defaults live in config.py and cover training of the whole network). cfg_from_file(args.cfg_file) calls cfg_from_file in config to read the parameters from that cfg file, then calls _merge_a_into_b to merge them into the defaults, where __C = edict() and cfg = __C, so cfg is a dictionary (an edict). Faster R-CNN trains with multiple processes, and mp_queue is the data structure used for inter-process communication:
import multiprocessing as mp
mp_queue = mp.Queue()
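The _merge_a_into_b merging described above can be sketched as follows (a minimal re-implementation for illustration, not the repo's exact code; the config values shown are illustrative):

```python
# A minimal sketch of config.py's _merge_a_into_b: values from the
# experiment's yml recursively override the defaults; unknown keys are rejected.
def merge_a_into_b(a, b):
    for k, v in a.items():
        if k not in b:
            raise KeyError('{} is not a valid config key'.format(k))
        if isinstance(v, dict):
            merge_a_into_b(v, b[k])   # recurse into sub-configs such as TRAIN
        else:
            b[k] = v                  # override the default

# Illustrative defaults and overrides (not the full config):
defaults = {'TRAIN': {'IMS_PER_BATCH': 2, 'HAS_RPN': False}, 'EXP_DIR': 'default'}
overrides = {'TRAIN': {'HAS_RPN': True}, 'EXP_DIR': 'faster_rcnn_alt_opt'}
merge_a_into_b(overrides, defaults)
print(defaults['TRAIN'])   # -> {'IMS_PER_BATCH': 2, 'HAS_RPN': True}
```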
Then solvers, max_iters, rpn_test_prototxt = get_solvers(args.net_name) fetches the solver settings.
After that, the script enters the individual training stages.
Step 2: Stage 1 RPN, init from ImageNet model

cfg.TRAIN.SNAPSHOT_INFIX = 'stage1'
mp_kwargs = dict(
queue=mp_queue,
imdb_name=args.imdb_name,
init_model=args.pretrained_model,
solver=solvers[0],
max_iters=max_iters[0])
p = mp.Process(target=train_rpn, kwargs=mp_kwargs)
p.start()
rpn_stage1_out = mp_queue.get()
p.join()
As you can see, the first step fine-tunes the RPN from the ImageNet model M0 to get model M1. The args values used here can all be found in the script experiments/scripts/faster_rcnn_alt_opt.sh. The function to focus on is train_rpn.
train_rpn breaks down into the following steps:
1. Adjust parameters on top of the config defaults to fit the current task, mainly:
cfg.TRAIN.HAS_RPN = True
cfg.TRAIN.BBOX_REG = False
# applies only to Fast R-CNN bbox regression
cfg.TRAIN.PROPOSAL_METHOD = 'gt'
Note that the proposal method here is 'gt'; the gt_roidb function will be used later because of this, which is important.
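What setting the proposal method does can be sketched like this (an assumed simplification of the imdb class, with placeholder payloads):

```python
# Simplified sketch of imdb.set_proposal_method: the string from
# cfg.TRAIN.PROPOSAL_METHOD picks a *_roidb method by name, and that method
# becomes the roidb_handler used when the roidb is first requested.
class ImdbSketch(object):
    def gt_roidb(self):
        return ['gt entries']    # placeholder for the real ground-truth roidb
    def rpn_roidb(self):
        return ['rpn entries']   # placeholder for the proposal-based roidb
    def set_proposal_method(self, method):
        # 'gt' -> self.gt_roidb, 'rpn' -> self.rpn_roidb
        self.roidb_handler = getattr(self, method + '_roidb')

imdb = ImdbSketch()
imdb.set_proposal_method('gt')
print(imdb.roidb_handler())   # -> ['gt entries']
```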
2. Initialize caffe
3. Prepare the roidb and imdb
The main function involved is get_roidb.
get_roidb calls get_imdb in factory, where the key in __sets[name] (a lambda expression) dispatches to the pascal_voc class. class pascal_voc(imdb) first calls its parent's initializer when constructing itself, setting for example:
year: '2007'
image_set: 'trainval'
devkit_path: 'data/VOCdevkit2007'
data_path: 'data/VOCdevkit2007/VOC2007'
classes: (...)        # change this to train on your own data
class_to_ind: {...}   # a dict mapping class names to indices 0, 1, 2, ...
image_ext: '.jpg'
image_index: ['000001', '000003', ...]   # image indices read from trainval.txt
roidb_handler: <bound method gt_roidb>
comp_id: 'comp4'
config: {...}
Note that no image data is read at this point; only the image index is built.
imdb.set_proposal_method(cfg.TRAIN.PROPOSAL_METHOD)
This sets the proposal method to 'gt', as discussed above, but it only sets how the roidb will be generated. The first actual call happens at the next line, roidb = get_training_roidb(imdb), when append_flipped_images() runs the line boxes = self.roidb[i]['boxes'].copy(). get_training_roidb lives in train.py and mainly flips every image horizontally and appends the flipped copies; it does so by calling imdb.append_flipped_images, and it is this call that triggers gt_roidb in pascal_voc, which in turn calls _load_pascal_annotation in the same file. That function uses each image's index to find the matching annotation file in the Annotations folder and loads all of its bounding box objects. That finishes the XML parsing; next the roidb member fields are filled in:
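The coordinate arithmetic of the horizontal flip can be sketched as follows (a simplified stand-in for append_flipped_images, covering only the box mirroring):

```python
# Sketch of the horizontal-flip augmentation done by append_flipped_images:
# x-coordinates are mirrored about the image width; y-coordinates are unchanged.
import numpy as np

def flip_boxes(boxes, width):
    """Mirror [x1, y1, x2, y2] boxes horizontally in an image `width` px wide."""
    flipped = boxes.copy()
    oldx1 = boxes[:, 0].copy()
    oldx2 = boxes[:, 2].copy()
    flipped[:, 0] = width - oldx2 - 1   # new x1 comes from the old x2
    flipped[:, 2] = width - oldx1 - 1   # new x2 comes from the old x1
    return flipped

boxes = np.array([[10, 20, 50, 80]])
print(flip_boxes(boxes, width=100))    # -> [[49 20 89 80]]
```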
- boxes: a 2-D array; each row stores xmin, ymin, xmax, ymax
- gt_classes: the class index of each box (the class tuple is declared in the initializer)
- gt_overlaps: a 2-D array with one row per box and num_classes columns; for each box, the entry at its class index is 1 and the rest are 0 (it is later converted to a sparse matrix)
- seg_areas: the area of each box
- flipped: False, meaning the image has not been flipped (train.py later adds the flipped copies and uses this flag to tell them apart)

Finally these members are assembled into the roidb and returned.
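Putting the members above together, a single roidb entry looks roughly like this (all values are made up for illustration; VOC has num_classes = 21 counting background, and the repo stores gt_overlaps as a sparse matrix):

```python
import numpy as np

# A hand-built example of one roidb entry after _load_pascal_annotation,
# for an image with two objects of (hypothetical) classes 3 and 7:
num_classes = 21
boxes = np.array([[48, 240, 195, 371],
                  [ 8,  12, 352, 498]], dtype=np.uint16)   # xmin ymin xmax ymax
gt_classes = np.array([3, 7], dtype=np.int32)              # class index per box
gt_overlaps = np.zeros((len(boxes), num_classes), dtype=np.float32)
gt_overlaps[np.arange(len(boxes)), gt_classes] = 1.0       # 1.0 at the gt class
widths = boxes[:, 2].astype(np.float32) - boxes[:, 0] + 1
heights = boxes[:, 3].astype(np.float32) - boxes[:, 1] + 1
entry = {'boxes': boxes,
         'gt_classes': gt_classes,
         'gt_overlaps': gt_overlaps,    # stored sparse in the repo
         'seg_areas': widths * heights,
         'flipped': False}
print(entry['gt_overlaps'].shape)   # (2, 21): one row per box, one column per class
```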
get_training_roidb also calls prepare_roidb (in roidb.py), which prepares the imdb's roidb by adding some fields to each entry's dict, such as image (the image path), width and height, and derives max_classes and max_overlaps from the gt_overlaps field described above.
return roidb, imdb
4. Set the output path: output_dir = get_output_dir(imdb). The function lives in config, and the path is used to save the intermediate caffemodels and similar artifacts.
5. Start the actual training:
model_paths = train_net(solver, roidb, output_dir,
pretrained_model=init_model,
max_iters=max_iters)
This calls train_net in train.py. First, filter_roidb checks that every roidb entry is reasonable, where "reasonable" means it has at least one foreground or one background box. When the roidb is all ground truth, each box overlaps its own class with value 1, so a valid entry only needs at least one labelled class. If the roidb also contains proposals, a proposal whose overlap falls in [BG_THRESH_LO, BG_THRESH_HI] counts as background and one above FG_THRESH counts as foreground; an entry must have at least one of either or it is filtered out. After dropping the useless entries, the function returns the filtered_roidb. The class to focus on in train.py is SolverWrapper, which wraps caffe's SGDSolver. Its last line, self.solver.net.layers[0].set_roidb(roidb), hands the roidb to layer 0 (here the RoIDataLayer) by calling the set_roidb method in layer.py, which also shuffles the order. Then train_model runs. At this point every layer is instantiated, starting with the RoIDataLayer (see setup in layer.py). During training the RoIDataLayer's forward only needs to copy data; which blobs it copies at each stage follows the network structure defined in the prototxt. blobs = self._get_next_minibatch() reads the image data by calling get_minibatch (in minibatch.py), which does the actual data preparation for faster rcnn, splitting out boxes, gt_boxes, im_info (height, width, scale), and so on.
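The filter_roidb check described above can be sketched as follows (simplified; the threshold values match the cfg.TRAIN defaults shown in the log later in this page):

```python
# Sketch of filter_roidb: an entry is valid if it has at least one foreground
# or one background RoI, judged by max_overlaps against the FG/BG thresholds.
import numpy as np

FG_THRESH, BG_THRESH_HI, BG_THRESH_LO = 0.5, 0.5, 0.1

def is_valid(entry):
    overlaps = entry['max_overlaps']
    fg = np.where(overlaps >= FG_THRESH)[0]
    bg = np.where((overlaps < BG_THRESH_HI) & (overlaps >= BG_THRESH_LO))[0]
    return len(fg) > 0 or len(bg) > 0

roidb = [{'max_overlaps': np.array([1.0, 0.3])},   # one fg, one bg -> kept
         {'max_overlaps': np.array([0.05])}]       # neither -> filtered out
filtered = [e for e in roidb if is_valid(e)]
print(len(filtered))   # -> 1
```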
In stage1_rpn_train.pt, this first layer has only 3 top blobs: 'data', 'im_info', 'gt_boxes'.
In stage1_fast_rcnn_train.pt, it has 6 top blobs: 'data', 'rois', 'labels', 'bbox_targets', 'bbox_inside_weights', 'bbox_outside_weights'. All of this data preparation happens in minibatch.py. From there on, the data flows inside caffe until training ends.
[Figure: part of the network structure, omitted]
Note that the rpn-data layer uses AnchorTargetLayer, which is implemented in Python and described later.
6. Save the resulting weights
rpn_stage1_out = mp_queue.get()
This completes stage 1. When later steps need these weights, they will look for them at this output path.
Step 3: Stage 1 RPN, generate proposals

This step uses the model M1 trained in the previous step to generate proposals P1; only proposals are produced here. The parameters:
mp_kwargs = dict(
queue=mp_queue,
imdb_name=args.imdb_name,
rpn_model_path=str(rpn_stage1_out['model_path']),
rpn_test_prototxt=rpn_test_prototxt)
p = mp.Process(target=rpn_generate, kwargs=mp_kwargs)
p.start()
rpn_stage1_out['proposal_path'] = mp_queue.get()['proposal_path']
p.join()
1. The rpn_generate function
The beginning is much the same as train_rpn above. Starting from rpn_proposals = imdb_proposals(rpn_net, imdb): imdb_proposals lives in rpn/generate.py, and rpn_proposals is a list of lists, one sublist per image. imdb_proposals reads each image with im = cv2.imread(imdb.image_path_at(i)) and calls im_proposals to generate that single image's rpn proposals along with their scores. im_proposals runs the network's forward pass to obtain the boxes and scores; it is worth working out how net forward and layer forward call each other in blobs_out = net.forward(data, im_info).
The proposals here likewise go through the Python-implemented ProposalLayer, also in the rpn folder, which is covered later.
boxes = blobs_out['rois'][:, 1:].copy() / scale
scores = blobs_out['scores'].copy()
return boxes, scores
This yields the imdb proposals.
2. Save the resulting proposal file
queue.put({'proposal_path': rpn_proposals_path})
rpn_stage1_out['proposal_path'] = mp_queue.get()['proposal_path']
This completes Stage 1 RPN, generate proposals.
Step 4: Stage 1 Fast R-CNN using RPN proposals, init from ImageNet model

cfg.TRAIN.SNAPSHOT_INFIX = 'stage1'
mp_kwargs = dict(
queue=mp_queue,
imdb_name=args.imdb_name,
init_model=args.pretrained_model,
solver=solvers[1],
max_iters=max_iters[1],
rpn_file=rpn_stage1_out['proposal_path'])
p = mp.Process(target=train_fast_rcnn, kwargs=mp_kwargs)
p.start()
fast_rcnn_stage1_out = mp_queue.get()
p.join()
This step trains the fast-rcnn model M2 from the ImageNet model M0, using the proposals generated in the previous step.
Focus on train_fast_rcnn.
Again, parameters are set first. Note that here cfg.TRAIN.PROPOSAL_METHOD = 'rpn', unlike before, so rpn_roidb will be called later. cfg.TRAIN.IMS_PER_BATCH = 2: each mini-batch holds two images together with the RoIs from their proposals. This step also receives an rpn_file (used later by rpn_roidb). The rest is much like before. One more point: during train_net, add_bbox_regression_targets (in roidb.py) is called to add the bbox regression targets, i.e. the roidb's 'bbox_targets' field, and, depending on the cfg settings, to compute the mean and std of those targets, since class-specific regressors are trained. This involves the bbox_overlaps function in util.bbox.
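The regression targets being normalized here use the standard R-CNN box parameterization; a simplified sketch of the bbox_transform computation:

```python
# Sketch of the bbox regression targets (the standard R-CNN parameterization):
# offsets of the box center normalized by the proposal size, plus log scale.
import numpy as np

def bbox_transform(ex_rois, gt_rois):
    ex_w = ex_rois[:, 2] - ex_rois[:, 0] + 1.0
    ex_h = ex_rois[:, 3] - ex_rois[:, 1] + 1.0
    ex_cx = ex_rois[:, 0] + 0.5 * ex_w
    ex_cy = ex_rois[:, 1] + 0.5 * ex_h
    gt_w = gt_rois[:, 2] - gt_rois[:, 0] + 1.0
    gt_h = gt_rois[:, 3] - gt_rois[:, 1] + 1.0
    gt_cx = gt_rois[:, 0] + 0.5 * gt_w
    gt_cy = gt_rois[:, 1] + 0.5 * gt_h
    dx = (gt_cx - ex_cx) / ex_w
    dy = (gt_cy - ex_cy) / ex_h
    dw = np.log(gt_w / ex_w)
    dh = np.log(gt_h / ex_h)
    return np.vstack((dx, dy, dw, dh)).transpose()

# A proposal that exactly matches the ground truth has all-zero targets:
rois = np.array([[10., 10., 50., 50.]])
print(bbox_transform(rois, rois))   # all-zero (1, 4) target row
```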
Note that get_roidb in this step uses rpn_roidb, as mentioned, which calls imdb.create_roidb_from_box_list. That method reads each image's boxes from box_list, which comes from the proposal file saved in the previous step, then does some processing (see the code for details) and finally returns a roidb. In this roidb, gt_overlaps is obtained by computing the IoU between the boxes in rpn_file and the boxes in gt_roidb (with some further processing), unlike the gt_roidb() case where gt_overlaps are all 1.0. imdb.merge_roidb, a static method of the imdb class, is then used to merge rpn_roidb and gt_roidb into a single roidb [the original author notes not fully understanding this part]; the merging details are worth studying.
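The IoU computation underlying gt_overlaps can be sketched for a single pair of boxes as follows (simplified; the real bbox_overlaps is vectorized over two whole box arrays):

```python
# Sketch of the IoU used to fill gt_overlaps when rpn_roidb merges
# proposals with ground truth (inclusive integer pixel coordinates):
def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(ix2 - ix1 + 1, 0), max(iy2 - iy1 + 1, 0)
    inter = iw * ih

    def area(b):
        return (b[2] - b[0] + 1) * (b[3] - b[1] + 1)

    union = area(box_a) + area(box_b) - inter
    return inter / float(union)

print(iou([0, 0, 9, 9], [0, 0, 9, 9]))   # -> 1.0
print(iou([0, 0, 9, 9], [5, 0, 14, 9]))  # 50 px overlap of 150 px union -> 1/3
```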
Step 5: Stage 2 RPN, init from stage 1 Fast R-CNN model

cfg.TRAIN.SNAPSHOT_INFIX = 'stage2'
mp_kwargs = dict(
queue=mp_queue,
imdb_name=args.imdb_name,
init_model=str(fast_rcnn_stage1_out['model_path']),
solver=solvers[2],
max_iters=max_iters[2])
p = mp.Process(target=train_rpn, kwargs=mp_kwargs)
p.start()
rpn_stage2_out = mp_queue.get()
p.join()
This part retrains the rpn network starting from model M2. Unlike the stage 1 rpn training, this time the conv layer parameters are frozen and only run forward; training produces model M3. This amounts to fine-tuning the rpn network.
Step 6: Stage 2 RPN, generate proposals

mp_kwargs = dict(
queue=mp_queue,
imdb_name=args.imdb_name,
rpn_model_path=str(rpn_stage2_out['model_path']),
rpn_test_prototxt=rpn_test_prototxt)
p = mp.Process(target=rpn_generate, kwargs=mp_kwargs)
p.start()
rpn_stage2_out['proposal_path'] = mp_queue.get()['proposal_path']
p.join()
This step uses model M3 from the previous step to produce proposals P2; the network structure is the same as the one that produced P1.
Step 7: Stage 2 Fast R-CNN, init from stage 2 RPN R-CNN model

cfg.TRAIN.SNAPSHOT_INFIX = 'stage2'
mp_kwargs = dict(
queue=mp_queue,
imdb_name=args.imdb_name,
init_model=str(rpn_stage2_out['model_path']),
solver=solvers[3],
max_iters=max_iters[3],
rpn_file=rpn_stage2_out['proposal_path'])
p = mp.Process(target=train_fast_rcnn, kwargs=mp_kwargs)
p.start()
fast_rcnn_stage2_out = mp_queue.get()
p.join()
This step trains fast rcnn from model M3 and proposals P2 to obtain the final model M4. Here both the conv layers and the rpn have fixed parameters; only the rcnn layers (i.e. the fully-connected layers) are trained. That differs from stage 1, where only the rpn layers were fixed and the other layers were still trained. The model structure is the same as in stage 1:
Step 8: Output the final model
final_path = os.path.join(
os.path.dirname(fast_rcnn_stage2_out['model_path']),
args.net_name + '_faster_rcnn_final.caffemodel')
print 'cp {} -> {}'.format(
fast_rcnn_stage2_out['model_path'], final_path)
shutil.copy(fast_rcnn_stage2_out['model_path'], final_path)
print 'Final model: {}'.format(final_path)
This is just a copy of the model output by the previous step.
At this point the whole faster-rcnn training process is finished.
AnchorTargetLayer和ProposalLayer
As mentioned earlier, two layers still need explaining: the anchor target layer and the proposal layer. A brief look at each follows.
class AnchorTargetLayer(caffe.Layer)
First the parameters are read from the prototxt; in practice only param_str: "'feat_stride': 16" is read. This is an important parameter. My current understanding is that it is the step of the sliding window, which matters for the size of the objects being detected; for recognizing small objects, for example, it would need to be reduced.
First, the setup part:
anchor_scales = layer_params.get('scales', (8, 16, 32))
self._anchors = generate_anchors(scales=np.array(anchor_scales))
This calls generate_anchors to produce the 9 initial anchors. The function lives in generate_anchors.py and generates anchors at multiple scales and aspect ratios: 8, 16, 32 are the scales [2^3, 2^4, 2^5], and base_size is 16; see the source for the details. The _ratio_enum() part generates anchors with the three aspect ratios 1:2, 1:1 and 2:1, and the _scale_enum() part generates the three sizes: starting from the _ratio_enum() anchor [0, 0, 15, 15], it expands to the three scales 128x128, 256x256 and 512x512. (The illustrating figures, taken from another referenced blog post, are omitted.)
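The two enumeration steps can be sketched together as follows (a condensed re-implementation of the generate_anchors logic for illustration; see rpn/generate_anchors.py for the original):

```python
# Condensed sketch of generate_anchors: start from a 16x16 base box,
# enumerate aspect ratios at constant area, then scale each shape up.
import numpy as np

def _mkanchors(ws, hs, cx, cy):
    # widths/heights -> [x1, y1, x2, y2] boxes around the center (cx, cy)
    return np.stack([cx - 0.5 * (ws - 1), cy - 0.5 * (hs - 1),
                     cx + 0.5 * (ws - 1), cy + 0.5 * (hs - 1)], axis=1)

def generate_anchors(base_size=16, ratios=(0.5, 1, 2), scales=(8, 16, 32)):
    cx = cy = (base_size - 1) / 2.0    # center of the base box [0, 0, 15, 15]
    size = base_size * base_size       # its area, 256
    anchors = []
    for r in ratios:                   # _ratio_enum: keep area, change shape
        ws = np.round(np.sqrt(size / r))
        hs = np.round(ws * r)
        for s in scales:               # _scale_enum: blow each shape up
            anchors.append(_mkanchors(np.array([ws * s]), np.array([hs * s]),
                                      cx, cy))
    return np.vstack(anchors)

anchors = generate_anchors()
print(anchors.shape)   # (9, 4): 3 ratios x 3 scales
print(anchors[0])      # the ratio-0.5 anchor at scale 8: [-84. -40.  99.  55.]
```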
The other function is forward().
In faster rcnn, different input images yield feature maps of different sizes. height, width = bottom[0].data.shape[-2:] first reads conv5's height and width; the ground-truth boxes come from gt_boxes = bottom[1].data and the image info from im_info = bottom[2].data[0, :]. Then the offsets are computed: shift_x = np.arange(0, width) * self._feat_stride. If the feature map is, say, H=61 and W=36, multiplying by 16 roughly recovers the input image size; that 16 is essentially the network's total downsampling ratio. Next the anchors are generated and filtered; see the code for details.
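The tiling of the base anchors over the feature map can be sketched as follows (simplified from AnchorTargetLayer.forward; the base anchors are replaced by zeros here as placeholders):

```python
# Simplified sketch of how AnchorTargetLayer.forward tiles the A = 9 base
# anchors over every feature-map cell with feat_stride = 16:
import numpy as np

feat_stride, H, W, A = 16, 61, 36, 9      # H, W as in the example above
shift_x = np.arange(0, W) * feat_stride   # x offset of each column, in pixels
shift_y = np.arange(0, H) * feat_stride   # y offset of each row
sx, sy = np.meshgrid(shift_x, shift_y)
shifts = np.vstack((sx.ravel(), sy.ravel(), sx.ravel(), sy.ravel())).transpose()
base_anchors = np.zeros((A, 4))           # placeholder for generate_anchors()
# broadcast: (1, A, 4) + (K, 1, 4) -> (K, A, 4), with K = H * W cells
all_anchors = (base_anchors.reshape((1, A, 4)) +
               shifts.reshape((-1, 1, 4))).reshape((-1, 4))
print(all_anchors.shape)   # -> (19764, 4), i.e. 61 * 36 * 9 anchors
```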
The other layer to understand is the proposal layer, which is only used at test time. Much of it resembles AnchorTargetLayer, so it is not described in detail here; see the code. The main thing to read is the forward function, whose algorithm is described in detail in its comments:
# Algorithm:
# for each (H, W) location i
#   generate A anchor boxes centered on cell i
#   apply predicted bbox deltas at cell i to each of the A anchors
# clip predicted boxes to image
# remove predicted boxes with either height or width < threshold
# sort all (proposal, score) pairs by score from highest to lowest
# take top pre_nms_topN proposals before NMS
# apply NMS with threshold 0.7 to remaining proposals
# take after_nms_topN proposals after NMS
# return the top proposals (-> RoIs top, scores top)
This function makes use of the NMS method.
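The greedy NMS being invoked can be sketched as follows (a simplified version of the pure-Python py_cpu_nms; in practice the repo usually runs the Cython/GPU versions):

```python
# Sketch of greedy non-maximum suppression over scored boxes:
import numpy as np

def nms(dets, thresh):
    """dets: (N, 5) array of [x1, y1, x2, y2, score]; returns kept indices."""
    x1, y1, x2, y2 = dets[:, 0], dets[:, 1], dets[:, 2], dets[:, 3]
    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
    order = dets[:, 4].argsort()[::-1]    # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the current top box with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        w = np.maximum(0.0, xx2 - xx1 + 1)
        h = np.maximum(0.0, yy2 - yy1 + 1)
        ovr = w * h / (areas[i] + areas[order[1:]] - w * h)
        # keep only boxes whose IoU with the top box is below the threshold
        order = order[np.where(ovr <= thresh)[0] + 1]
    return keep

dets = np.array([[ 0.,  0.,  9.,  9., 0.9],
                 [ 1.,  1., 10., 10., 0.8],   # IoU ~0.68 with the first box
                 [50., 50., 60., 60., 0.7]])  # disjoint from the others
print(nms(dets, thresh=0.3))                  # -> [0, 2]
```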
Code folder notes (tools, RPN, nms)
References:
http://blog.csdn.net/u/article/category/6237110
http://blog.csdn.net/sunyiyou9/article/category/6269359
http://blog.csdn.net/bailufeiyan/article/details/
Original post: http://blog.csdn.net/u/article/details/
Asked at 18:13:
Which layer is going wrong during faster rcnn training?
echo Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt._01-16-47
Logging output to experiments/logs/faster_rcnn_alt_opt_ZF_.txt._01-16-47
./tools/train_faster_rcnn_alt_opt.py --gpu 0 --net_name ZF --weights data/imagenet_models/CaffeNet.v2.caffemodel --imdb voc_2007_trainval --cfg experiments/cfgs/faster_rcnn_alt_opt.yml
Called with args:
Namespace(cfg_file='experiments/cfgs/faster_rcnn_alt_opt.yml', gpu_id=0, imdb_name='voc_2007_trainval', net_name='ZF', pretrained_model='data/imagenet_models/CaffeNet.v2.caffemodel', set_cfgs=None)
Stage 1 RPN, init from ImageNet model
Init model: data/imagenet_models/CaffeNet.v2.caffemodel
Using config:
{'DATA_DIR': 'E:\caffe-frcnn\py-faster-rcnn-master\data',
'DEDUP_BOXES': 0.0625,
'EPS': 1e-14,
'EXP_DIR': 'default',
'GPU_ID': 0,
'MATLAB': 'matlab',
'MODELS_DIR': 'E:\caffe-frcnn\py-faster-rcnn-master\models\pascal_voc',
'PIXEL_MEANS': array([[[ 102.9801,
122.7717]]]),
'RNG_SEED': 3,
'ROOT_DIR': 'E:\caffe-frcnn\py-faster-rcnn-master',
'TEST': {'BBOX_REG': True,
'HAS_RPN': False,
'MAX_SIZE': 1000,
'NMS': 0.3,
'PROPOSAL_METHOD': 'selective_search',
'RPN_MIN_SIZE': 16,
'RPN_NMS_THRESH': 0.7,
'RPN_POST_NMS_TOP_N': 300,
'RPN_PRE_NMS_TOP_N': 6000,
'SCALES': [600],
'SVM': False},
'TRAIN': {'ASPECT_GROUPING': True,
'BATCH_SIZE': 128,
'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
'BBOX_NORMALIZE_TARGETS': True,
'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': False,
'BBOX_REG': False,
'BBOX_THRESH': 0.5,
'BG_THRESH_HI': 0.5,
'BG_THRESH_LO': 0.1,
'FG_FRACTION': 0.25,
'FG_THRESH': 0.5,
'HAS_RPN': True,
'IMS_PER_BATCH': 1,
'MAX_SIZE': 1000,
'PROPOSAL_METHOD': 'gt',
'RPN_BATCHSIZE': 256,
'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'RPN_CLOBBER_POSITIVES': False,
'RPN_FG_FRACTION': 0.5,
'RPN_MIN_SIZE': 16,
'RPN_NEGATIVE_OVERLAP': 0.3,
'RPN_NMS_THRESH': 0.7,
'RPN_POSITIVE_OVERLAP': 0.7,
'RPN_POSITIVE_WEIGHT': -1.0,
'RPN_POST_NMS_TOP_N': 2000,
'RPN_PRE_NMS_TOP_N': 12000,
'SCALES': [600],
'SNAPSHOT_INFIX': '',
'SNAPSHOT_ITERS': 10000,
'USE_FLIPPED': True,
'USE_PREFETCH': False},
'USE_GPU_NMS': True}
Loaded dataset voc_2007_trainval for training
Set proposal method: gt
Appending horizontally-flipped training examples...
voc_2007_trainval gt roidb loaded from E:\caffe-frcnn\py-faster-rcnn-master\data\cache\voc_2007_trainval_gt_roidb.pkl
Preparing training data...
roidb len: 100
Output will be saved to E:\caffe-frcnn\py-faster-rcnn-master\output\default\voc_2007_trainval
Filtered 0 roidb entries: 100 -> 100
WARNING: Logging before InitGoogleLogging() is written to STDERR
I:54.40 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
I:55.40 solver.cpp:44] Initializing solver from parameters:
train_net: "models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt"
base_lr: 0.001
display: 20
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
stepsize: 60000
snapshot: 0
snapshot_prefix: "zf_rpn"
average_loss: 100
I:55.40 solver.cpp:77] Creating training net from train_net file: models/pascal_voc/ZF/faster_rcnn_alt_opt/stage1_rpn_train.pt
I:55.40 net.cpp:51] Initializing net from parameters:
name: "ZF"
phase: TRAIN
name: "input-data"
type: "Python"
top: "data"
top: "im_info"
top: "gt_boxes"
python_param {
module: "roi_data_layer.layer"
layer: "RoIDataLayer"
param_str: "\'num_classes\': 2"
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 96
kernel_size: 7
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
name: "norm1"
type: "LRN"
bottom: "conv1"
top: "norm1"
lrn_param {
local_size: 3
alpha: 5e-05
beta: 0.75
norm_region: WITHIN_CHANNEL
engine: CAFFE
name: "pool1"
type: "Pooling"
bottom: "norm1"
top: "pool1"
pooling_param {
kernel_size: 3
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 256
kernel_size: 5
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
name: "norm2"
type: "LRN"
bottom: "conv2"
top: "norm2"
lrn_param {
local_size: 3
alpha: 5e-05
beta: 0.75
norm_region: WITHIN_CHANNEL
engine: CAFFE
name: "pool2"
type: "Pooling"
bottom: "norm2"
top: "pool2"
pooling_param {
kernel_size: 3
name: "conv3"
type: "Convolution"
bottom: "pool2"
top: "conv3"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 384
kernel_size: 3
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
name: "conv4"
type: "Convolution"
bottom: "conv3"
top: "conv4"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 384
kernel_size: 3
name: "relu4"
type: "ReLU"
bottom: "conv4"
top: "conv4"
name: "conv5"
type: "Convolution"
bottom: "conv4"
top: "conv5"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 256
kernel_size: 3
name: "relu5"
type: "ReLU"
bottom: "conv5"
top: "conv5"
name: "rpn_conv1"
type: "Convolution"
bottom: "conv5"
top: "rpn_conv1"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 256
kernel_size: 3
weight_filler {
type: "gaussian"
bias_filler {
type: "constant"
name: "rpn_relu1"
type: "ReLU"
bottom: "rpn_conv1"
top: "rpn_conv1"
name: "rpn_cls_score"
type: "Convolution"
bottom: "rpn_conv1"
top: "rpn_cls_score"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 18
kernel_size: 1
weight_filler {
type: "gaussian"
bias_filler {
type: "constant"
name: "rpn_bbox_pred"
type: "Convolution"
bottom: "rpn_conv1"
RoiDataLayer: name_to_top: {'gt_boxes': 2, 'data': 0, 'im_info': 1}
top: "rpn_bbox_pred"
lr_mult: 1
lr_mult: 2
convolution_param {
num_output: 36
kernel_size: 1
weight_filler {
type: "gaussian"
bias_filler {
type: "constant"
name: "rpn_cls_score_reshape"
type: "Reshape"
bottom: "rpn_cls_score"
top: "rpn_cls_score_reshape"
reshape_param {
name: "rpn-data"
type: "Python"
bottom: "rpn_cls_score"
bottom: "gt_boxes"
bottom: "im_info"
bottom: "data"
top: "rpn_labels"
top: "rpn_bbox_targets"
top: "rpn_bbox_inside_weights"
top: "rpn_bbox_outside_weights"
python_param {
module: "rpn.anchor_target_layer"
layer: "AnchorTargetLayer"
param_str: "\'feat_stride\': 16"
name: "rpn_loss_cls"
type: "SoftmaxWithLoss"
bottom: "rpn_cls_score_reshape"
bottom: "rpn_labels"
top: "rpn_cls_loss"
loss_weight: 1
propagate_down: true
propagate_down: false
loss_param {
ignore_label: -1
normalize: true
name: "rpn_loss_bbox"
type: "SmoothL1Loss"
bottom: "rpn_bbox_pred"
bottom: "rpn_bbox_targets"
bottom: "rpn_bbox_inside_weights"
bottom: "rpn_bbox_outside_weights"
top: "rpn_loss_bbox"
loss_weight: 1
smooth_l1_loss_param {
name: "dummy_roi_pool_conv5"
type: "DummyData"
top: "dummy_roi_pool_conv5"
dummy_data_param {
data_filler {
type: "gaussian"
name: "fc6"
type: "InnerProduct"
bottom: "dummy_roi_pool_conv5"
top: "fc6"
lr_mult: 0
decay_mult: 0
lr_mult: 0
decay_mult: 0
inner_product_param {
num_output: 4096
name: "relu6"
type: "ReLU"
bottom: "fc6"
top: "fc6"
name: "fc7"
type: "InnerProduct"
bottom: "fc6"
top: "fc7"
lr_mult: 0
decay_mult: 0
lr_mult: 0
decay_mult: 0
inner_product_param {
num_output: 4096
name: "silence_fc7"
type: "Silence"
bottom: "fc7"
I:55.40 layer_factory.cpp:58] Creating layer input-data
I:55.40 net.cpp:84] Creating Layer input-data
I:55.40 net.cpp:380] input-data -& data
I:55.40 net.cpp:380] input-data -& im_info
I:55.40 net.cpp:380] input-data -& gt_boxes
I:55.40 net.cpp:122] Setting up input-data
I:55.40 net.cpp:129] Top shape: 1 3 600 0)
I:55.40 net.cpp:129] Top shape: 1 3 (3)
I:55.40 net.cpp:129] Top shape: 1 4 (4)
I:55.40 net.cpp:137] Memory required for data: 7200028
I:55.40 layer_factory.cpp:58] Creating layer data_input-data_0_split
I:55.40 net.cpp:84] Creating Layer data_input-data_0_split
I:55.40 net.cpp:406] data_input-data_0_split &- data
I:55.40 net.cpp:380] data_input-data_0_split -& data_input-data_0_split_0
I:55.40 net.cpp:380] data_input-data_0_split -& data_input-data_0_split_1
I:55.40 net.cpp:122] Setting up data_input-data_0_split
I:55.40 net.cpp:129] Top shape: 1 3 600 0)
I:55.40 net.cpp:129] Top shape: 1 3 600 0)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer conv1
I:55.40 net.cpp:84] Creating Layer conv1
I:55.40 net.cpp:406] conv1 &- data_input-data_0_split_0
I:55.40 net.cpp:380] conv1 -& conv1
I:55.40 net.cpp:122] Setting up conv1
I:55.40 net.cpp:129] Top shape: 1 96 300 500 ()
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer relu1
I:55.40 net.cpp:84] Creating Layer relu1
I:55.40 net.cpp:406] relu1 &- conv1
I:55.40 net.cpp:367] relu1 -& conv1 (in-place)
I:55.40 net.cpp:122] Setting up relu1
I:55.40 net.cpp:129] Top shape: 1 96 300 500 ()
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer norm1
I:55.40 net.cpp:84] Creating Layer norm1
I:55.40 net.cpp:406] norm1 &- conv1
I:55.40 net.cpp:380] norm1 -& norm1
I:55.40 net.cpp:122] Setting up norm1
I:55.40 net.cpp:129] Top shape: 1 96 300 500 ()
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer pool1
I:55.40 net.cpp:84] Creating Layer pool1
I:55.40 net.cpp:406] pool1 &- norm1
I:55.40 net.cpp:380] pool1 -& pool1
I:55.40 net.cpp:122] Setting up pool1
I:55.40 net.cpp:129] Top shape: 1 96 151 251 (3638496)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer conv2
I:55.40 net.cpp:84] Creating Layer conv2
I:55.40 net.cpp:406] conv2 &- pool1
I:55.40 net.cpp:380] conv2 -& conv2
I:55.40 net.cpp:122] Setting up conv2
I:55.40 net.cpp:129] Top shape: 1 256 76 126 (2451456)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer relu2
I:55.40 net.cpp:84] Creating Layer relu2
I:55.40 net.cpp:406] relu2 &- conv2
I:55.40 net.cpp:367] relu2 -& conv2 (in-place)
I:55.40 net.cpp:122] Setting up relu2
I:55.40 net.cpp:129] Top shape: 1 256 76 126 (2451456)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer norm2
I:55.40 net.cpp:84] Creating Layer norm2
I:55.40 net.cpp:406] norm2 &- conv2
I:55.40 net.cpp:380] norm2 -& norm2
I:55.40 net.cpp:122] Setting up norm2
I:55.40 net.cpp:129] Top shape: 1 256 76 126 (2451456)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer pool2
I:55.40 net.cpp:84] Creating Layer pool2
I:55.40 net.cpp:406] pool2 &- norm2
I:55.40 net.cpp:380] pool2 -& pool2
I:55.40 net.cpp:122] Setting up pool2
I:55.40 net.cpp:129] Top shape: 1 256 39 64 (638976)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer conv3
I:55.40 net.cpp:84] Creating Layer conv3
I:55.40 net.cpp:406] conv3 &- pool2
I:55.40 net.cpp:380] conv3 -& conv3
I:55.40 net.cpp:122] Setting up conv3
I:55.40 net.cpp:129] Top shape: 1 384 39 64 (958464)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer relu3
I:55.40 net.cpp:84] Creating Layer relu3
I:55.40 net.cpp:406] relu3 &- conv3
I:55.40 net.cpp:367] relu3 -& conv3 (in-place)
I:55.40 net.cpp:122] Setting up relu3
I:55.40 net.cpp:129] Top shape: 1 384 39 64 (958464)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer conv4
I:55.40 net.cpp:84] Creating Layer conv4
I:55.40 net.cpp:406] conv4 &- conv3
I:55.40 net.cpp:380] conv4 -& conv4
I:55.40 net.cpp:122] Setting up conv4
I:55.40 net.cpp:129] Top shape: 1 384 39 64 (958464)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer relu4
I:55.40 net.cpp:84] Creating Layer relu4
I:55.40 net.cpp:406] relu4 &- conv4
I:55.40 net.cpp:367] relu4 -& conv4 (in-place)
I:55.40 net.cpp:122] Setting up relu4
I:55.40 net.cpp:129] Top shape: 1 384 39 64 (958464)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer conv5
I:55.40 net.cpp:84] Creating Layer conv5
I:55.40 net.cpp:406] conv5 &- conv4
I:55.40 net.cpp:380] conv5 -& conv5
I:55.40 net.cpp:122] Setting up conv5
I:55.40 net.cpp:129] Top shape: 1 256 39 64 (638976)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer relu5
I:55.40 net.cpp:84] Creating Layer relu5
I:55.40 net.cpp:406] relu5 &- conv5
I:55.40 net.cpp:367] relu5 -& conv5 (in-place)
I:55.40 net.cpp:122] Setting up relu5
I:55.40 net.cpp:129] Top shape: 1 256 39 64 (638976)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_conv1
I:55.40 net.cpp:84] Creating Layer rpn_conv1
I:55.40 net.cpp:406] rpn_conv1 &- conv5
I:55.40 net.cpp:380] rpn_conv1 -& rpn_conv1
I:55.40 net.cpp:122] Setting up rpn_conv1
I:55.40 net.cpp:129] Top shape: 1 256 39 64 (638976)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_relu1
I:55.40 net.cpp:84] Creating Layer rpn_relu1
I:55.40 net.cpp:406] rpn_relu1 &- rpn_conv1
I:55.40 net.cpp:367] rpn_relu1 -& rpn_conv1 (in-place)
I:55.40 net.cpp:122] Setting up rpn_relu1
I:55.40 net.cpp:129] Top shape: 1 256 39 64 (638976)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_conv1_rpn_relu1_0_split
I:55.40 net.cpp:84] Creating Layer rpn_conv1_rpn_relu1_0_split
I:55.40 net.cpp:406] rpn_conv1_rpn_relu1_0_split &- rpn_conv1
I:55.40 net.cpp:380] rpn_conv1_rpn_relu1_0_split -& rpn_conv1_rpn_relu1_0_split_0
I:55.40 net.cpp:380] rpn_conv1_rpn_relu1_0_split -& rpn_conv1_rpn_relu1_0_split_1
I:55.40 net.cpp:122] Setting up rpn_conv1_rpn_relu1_0_split
I:55.40 net.cpp:129] Top shape: 1 256 39 64 (638976)
I:55.40 net.cpp:129] Top shape: 1 256 39 64 (638976)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_cls_score
I:55.40 net.cpp:84] Creating Layer rpn_cls_score
I:55.40 net.cpp:406] rpn_cls_score &- rpn_conv1_rpn_relu1_0_split_0
I:55.40 net.cpp:380] rpn_cls_score -& rpn_cls_score
I:55.40 net.cpp:122] Setting up rpn_cls_score
I:55.40 net.cpp:129] Top shape: 1 18 39 64 (44928)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_cls_score_rpn_cls_score_0_split
I:55.40 net.cpp:84] Creating Layer rpn_cls_score_rpn_cls_score_0_split
I:55.40 net.cpp:406] rpn_cls_score_rpn_cls_score_0_split &- rpn_cls_score
I:55.40 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -& rpn_cls_score_rpn_cls_score_0_split_0
I:55.40 net.cpp:380] rpn_cls_score_rpn_cls_score_0_split -& rpn_cls_score_rpn_cls_score_0_split_1
I:55.40 net.cpp:122] Setting up rpn_cls_score_rpn_cls_score_0_split
I:55.40 net.cpp:129] Top shape: 1 18 39 64 (44928)
I:55.40 net.cpp:129] Top shape: 1 18 39 64 (44928)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_bbox_pred
I:55.40 net.cpp:84] Creating Layer rpn_bbox_pred
I:55.40 net.cpp:406] rpn_bbox_pred &- rpn_conv1_rpn_relu1_0_split_1
I:55.40 net.cpp:380] rpn_bbox_pred -& rpn_bbox_pred
I:55.40 net.cpp:122] Setting up rpn_bbox_pred
I:55.40 net.cpp:129] Top shape: 1 36 39 64 (89856)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_cls_score_reshape
I:55.40 net.cpp:84] Creating Layer rpn_cls_score_reshape
I:55.40 net.cpp:406] rpn_cls_score_reshape &- rpn_cls_score_rpn_cls_score_0_split_0
I:55.40 net.cpp:380] rpn_cls_score_reshape -& rpn_cls_score_reshape
I:55.40 net.cpp:122] Setting up rpn_cls_score_reshape
I:55.40 net.cpp:129] Top shape: 1 2 351 64 (44928)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn-data
I:55.40 net.cpp:84] Creating Layer rpn-data
I:55.40 net.cpp:406] rpn-data &- rpn_cls_score_rpn_cls_score_0_split_1
I:55.40 net.cpp:406] rpn-data &- gt_boxes
I:55.40 net.cpp:406] rpn-data &- im_info
I:55.40 net.cpp:406] rpn-data &- data_input-data_0_split_1
I:55.40 net.cpp:380] rpn-data -& rpn_labels
I:55.40 net.cpp:380] rpn-data -& rpn_bbox_targets
I:55.40 net.cpp:380] rpn-data -& rpn_bbox_inside_weights
I:55.40 net.cpp:380] rpn-data -& rpn_bbox_outside_weights
I:55.40 net.cpp:122] Setting up rpn-data
I:55.40 net.cpp:129] Top shape: 1 1 351 64 (22464)
I:55.40 net.cpp:129] Top shape: 1 36 39 64 (89856)
I:55.40 net.cpp:129] Top shape: 1 36 39 64 (89856)
I:55.40 net.cpp:129] Top shape: 1 36 39 64 (89856)
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_loss_cls
I:55.40 net.cpp:84] Creating Layer rpn_loss_cls
I:55.40 net.cpp:406] rpn_loss_cls &- rpn_cls_score_reshape
I:55.40 net.cpp:406] rpn_loss_cls &- rpn_labels
I:55.40 net.cpp:380] rpn_loss_cls -& rpn_cls_loss
I:55.40 layer_factory.cpp:58] Creating layer rpn_loss_cls
I:55.40 net.cpp:122] Setting up rpn_loss_cls
I:55.40 net.cpp:129] Top shape: (1)
I:55.40 net.cpp:132]
with loss weight 1
I:55.40 net.cpp:137] Memory required for data:
I:55.40 layer_factory.cpp:58] Creating layer rpn_loss_bbox
I:55.40 net.cpp:84] Creating Layer rpn_loss_bbox
I:55.40 net.cpp:406] rpn_loss_bbox <- rpn_bbox_pred
I:55.40 net.cpp:406] rpn_loss_bbox <- rpn_bbox_targets
I:55.*** Check failure stack trace: ***