This repository has been archived by the owner on May 1, 2023. It is now read-only.
Hi,
Thank you to your team for such nice work!
My team has trained a model with torchvision's Faster R-CNN, and now we need to compress it. After some struggle, we decided to use Distiller for the job. We now face the question of how to compress or accelerate the model: by pruning or by quantization. Either method would be acceptable, but for scheduling reasons we do not have much time.
1. Can you advise which approach takes less time, pruning or quantization?
2. We see that Distiller provides an API for pruning torchvision's Faster R-CNN, and I want to know how to prune with a different dataset.
I'm new to Distiller, so some of my expressions may not be professional.
Thank you for your reply.
lrh454830526 changed the title from "How to compress my object detection with torchvision" to "How to compress my object detection model" on Aug 12, 2020.
I have compressed my Faster R-CNN model with my own dataset. You can use the Faster R-CNN API that PyTorch itself offers by adding the option --model fasterrcnn_resnet50_fpn. But I ran into a problem with the compressed model's size: does compression_scheduler.state_dict() save the smaller model's parameters or not? Why does my model have the same size under different sparsity pruning settings?
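On the checkpoint-size question: element-wise pruning typically zeroes weights behind a mask but keeps the dense tensor shapes, so the serialized checkpoint stays the same size no matter how high the sparsity is; the file only shrinks once the zeros are physically removed (for example by removing whole channels, or by storing the tensors in a sparse format). A torch-free sketch of the effect, using plain Python lists and pickle in place of model tensors:

```python
import pickle
import random

# A dense "weight tensor" of 1000 floats.
dense = [random.random() for _ in range(1000)]

# Simulate 90% element-wise pruning: most weights become exactly 0.0,
# but the tensor keeps its dense shape (still 1000 stored values).
pruned = [w if i % 10 == 0 else 0.0 for i, w in enumerate(dense)]

# Dense serialization costs the same per element whether or not the
# value is zero, so the pruned checkpoint is as large as the original.
dense_bytes = len(pickle.dumps(dense))
pruned_bytes = len(pickle.dumps(pruned))

# Storing only the nonzero entries (index -> value) is what actually
# shrinks the file; dense-on-disk pruning alone does not.
sparse = {i: w for i, w in enumerate(pruned) if w != 0.0}
sparse_bytes = len(pickle.dumps(sparse))

print(dense_bytes, pruned_bytes, sparse_bytes)
```

The same reasoning applies to a pruned PyTorch checkpoint: identical file sizes across different sparsity settings are expected unless the model is thinned or stored sparsely after pruning.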