
Fast R-CNN Summary (4)

by Candy Lee 2021. 1. 7.

This paper was written by Ross Girshick (Microsoft Research).

 

 

 

Notes summarizing the body of the paper (this is a personal study post).

 

 

[Fast R-CNN Detection]

For each RoI r -> the forward pass produces two outputs.

Output_1 = a posterior class probability distribution p

Output_2 = a set of predicted bounding-box offsets relative to r (one refined box per class)

 

Non-maximum suppression is then applied independently for each class, using the algorithm and settings from R-CNN.
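To make the per-class post-processing concrete, below is a minimal NumPy sketch of greedy non-maximum suppression run independently for each class. The score and IoU thresholds are placeholder values for illustration, not the exact settings used in the paper.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it too much."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]               # indices sorted by score, descending
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]     # discard heavily-overlapping boxes
    return keep

def per_class_nms(boxes, class_probs, score_thresh=0.05, iou_thresh=0.3):
    """Run NMS independently for each non-background class."""
    detections = []
    num_classes = class_probs.shape[1]
    for c in range(1, num_classes):              # class 0 assumed to be background
        scores = class_probs[:, c]
        mask = scores > score_thresh
        if not mask.any():
            continue
        for i in nms(boxes[mask], scores[mask], iou_thresh):
            detections.append((c, scores[mask][i], boxes[mask][i]))
    return detections
```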

 

 

 

[Truncated SVD for faster detection]

Large fully connected (FC) layers can be accelerated by compressing them with truncated SVD.

 

Reference for truncated SVD)

URL : bkshin.tistory.com/entry/%EB%A8%B8%EC%8B%A0%EB%9F%AC%EB%8B%9D-20-%ED%8A%B9%EC%9D%B4%EA%B0%92-%EB%B6%84%ED%95%B4Singular-Value-Decomposition

(Linked post: 머신러닝 - 20. 특이값 분해(SVD) on bkshin.tistory.com, a Korean introduction to singular value decomposition that assumes familiarity with eigendecomposition.)

With this application of truncated SVD

=> the parameter count is reduced from uv to t(u+v).

Reason: the u x v weight matrix is factored through a much smaller rank t, so the dimension of the intermediate representation is decreased.

 

To compress the network,

the single FC layer with weight matrix W

-> is replaced by two FC layers (with no non-linearity between them).

 

This simple compression with truncated SVD

gives good speedups when the number of RoIs is large.
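As a rough sketch of the idea (not the paper's actual implementation), the snippet below compresses a single FC layer into two smaller ones using NumPy's SVD. The layer sizes u, v and the rank t are made-up values chosen only to illustrate the uv -> t(u+v) parameter reduction.

```python
import numpy as np

# Hypothetical FC layer: y = W x + b, with W of shape (u, v)
u, v, t = 4096, 9216, 256                 # illustrative sizes; t << min(u, v)
W = np.random.randn(u, v).astype(np.float32)
b = np.random.randn(u).astype(np.float32)

# Truncated SVD: W is approximated by U_t @ diag(S_t) @ Vt_t
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W1 = np.diag(S[:t]) @ Vt[:t]              # first new FC layer, shape (t, v), no bias
W2 = U[:, :t]                             # second new FC layer, shape (u, t), keeps the original bias b

x = np.random.randn(v).astype(np.float32)
y_full = W @ x + b                        # original single FC layer
y_svd  = W2 @ (W1 @ x) + b                # same computation through two smaller FC layers

# Parameter count drops from u*v to t*(u+v)
print(u * v, "->", t * (u + v))
# Note: for a random W the truncation error is large; trained FC weights
# are claimed to compress well in practice.
print("max abs error:", np.abs(y_full - y_svd).max())
```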

 

 

 

 

[Which layers to fine-tune?]

To check which layers need fine-tuning, Fast R-CNN runs an experiment that

-> freezes the 13 conv layers (of VGG16) and lets only the fully connected layers train on the dataset.
The resulting drop in mAP shows that fine-tuning through the conv layers is important for very deep networks.

 

#Fine tuning#

Fine-tuning updates the weights of a pre-trained model to adapt it to a new task.

Because many parameters are re-tuned on a smaller dataset, it can lead to overfitting of the model.
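For illustration only, here is what "freeze the conv layers and train only the FC layers" could look like in PyTorch, using torchvision's VGG16 as a stand-in backbone. The use of torchvision (>= 0.13) and its features/classifier split are my assumptions, not something taken from the paper.

```python
import torch
from torchvision import models

# Pre-trained VGG16 as a stand-in backbone (assumption: torchvision layout,
# where the 13 conv layers live in .features and the FC layers in .classifier).
model = models.vgg16(weights="IMAGENET1K_V1")

# Freeze the conv layers: their weights are not updated during fine-tuning.
for p in model.features.parameters():
    p.requires_grad = False

# Only the FC layers remain trainable, so only they are handed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
```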

 

 

 

[Does multi-task training help?]

Multi-task training is convenient

-> it removes the need to manage a pipeline of sequentially-trained tasks.

 

Across all three networks (S, M, and L, the models tested in the paper), multi-task training

improved pure classification accuracy relative to training for classification alone

(+0.8 to +1.1 mAP points).
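As a sketch of what a multi-task loss in this spirit might look like (log loss for classification plus smooth L1 on the box offsets of the ground-truth class, weighted by lambda), here is a small PyTorch function. The tensor shapes and lam=1.0 are assumptions for illustration, not the paper's exact hyperparameters.

```python
import torch
import torch.nn.functional as F

def multi_task_loss(class_logits, bbox_pred, labels, bbox_targets, lam=1.0):
    """class_logits: (N, K+1), bbox_pred: (N, K+1, 4), labels: (N,), bbox_targets: (N, 4)."""
    # Classification term: log loss over K object classes plus background
    loss_cls = F.cross_entropy(class_logits, labels)

    # Localization term: smooth L1 on the offsets predicted for the ground-truth class,
    # computed only for RoIs that are not background (label 0)
    fg = labels > 0
    if fg.any():
        pred_fg = bbox_pred[fg, labels[fg]]          # (num_fg, 4)
        loss_loc = F.smooth_l1_loss(pred_fg, bbox_targets[fg])
    else:
        loss_loc = bbox_pred.sum() * 0.0             # no foreground RoIs: contribute nothing

    return loss_cls + lam * loss_loc
```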

 

 

[Do we need more training data?]

Yes. Enlarging the training set (e.g., augmenting VOC07 trainval with VOC12 data) improves mAP.

 

 

[Do SVMs outperform softmax?]

Fast R-CNN (softmax)  vs  SVM

In the paper's comparison, softmax slightly outperforms the SVM for all three networks.

The difference is small.

But!

It demonstrates that "one-shot" fine-tuning is sufficient compared to the previous

multi-stage training approach.

 

 

 

[Are more proposals always better?]

Types of object detectors)

1. Detectors that use a sparse set of object proposals (e.g., selective search).

2. Detectors that use a dense set of proposals (e.g., sliding windows).

=> The experiments show that swamping the deep classifier with more object proposals does not help its accuracy, and can even hurt it slightly.

So, more proposals are not always better.

 

 

 

 

[Conclusion]

This paper proposes Fast R-CNN, a clean and fast update to R-CNN for object detection.

It also notes that sparse object proposals appear to improve detector quality.

 

 

This concludes the Fast R-CNN paper study post.

Some parts of the paper were omitted along the way.

Thank you as always.

 

 

 

To see the Korean-language comparison summary of R-CNN & Fast R-CNN, please click below!

candyz.tistory.com/20

(Linked post: R-CNN & Fast R-CNN 비교 정리, a comparison of the key points of R-CNN and Fast R-CNN, on candyz.tistory.com.)

 

