
ModuleNotFoundError: No module named 'models' #353

Closed
zjZSTU opened this issue Jul 10, 2020 · 15 comments
Labels
enhancement New feature or request question Further information is requested Stale

Comments

zjZSTU commented Jul 10, 2020

ModuleNotFoundError: No module named 'models'

Hi yolov5 team, I hit this problem when trying to use the model in my own project. The question is already solved, but I think it's classic enough to open a new issue describing it.

Reproduce

In the yolov5 repo, the inference script is detect.py and the model is ./weights/yolov5s.pt. The complete detection command is as follows:

python detect.py --source ./inference/images/ --weights yolov5s.pt --conf 0.4

I retrained the model and saved it in my repo as ./weights/best.pt, and I copied yolov5's models and utils folders into my repo:

└── yolov5
    ├── detect.py
    ├── models
    ├── __pycache__
    └── utils

I use ./demo/test.py to run prediction, and the program fails with the error above.

Reason

Refer to:

torch.load() requires model module in the same folder #3678

ModuleNotFoundError: No module named 'models' #18325

Pytorch.load() error:No module named ‘model’

The key point is that Pickle requires the same module structure in the loading environment as in the saving one if the model is saved with the following code:

torch.save(model, PATH)

I checked the source file train.py:

            # Save last, best and delete
            torch.save(ckpt, last)
            if (best_fitness == fi) and not final_epoch:
                torch.save(ckpt, best)
            del ckpt

Yes, that's it
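A minimal, self-contained sketch of why this fails (plain pickle with a hypothetical class name standing in for YOLOv5's models.yolo.Model): pickling an object records the class's defining module by name, not its code, so unpickling must be able to re-import that module.

```python
import pickle

class Detector:  # hypothetical stand-in for YOLOv5's models.yolo.Model
    pass

blob = pickle.dumps(Detector())

# The pickle stream stores the defining module and class name, not the
# class code, so pickle.loads must re-import that module by name. If the
# "models" package is absent from sys.path at load time, this is exactly
# what raises ModuleNotFoundError: No module named 'models'.
print(b"Detector" in blob)  # prints True: the class name is in the bytes
```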

Solution

Use sys.path to include the source directory, like this:

import sys
sys.path.insert(0, './yolov5')
@zjZSTU zjZSTU added the question Further information is requested label Jul 10, 2020
github-actions bot commented Jul 10, 2020

Hello @zjZSTU, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher glenn-jocher added the enhancement New feature or request label Jul 10, 2020
@glenn-jocher

@zjZSTU hi there! That's interesting, so if I understand correctly to use a custom trained model in your repo the easiest route is simply to add ./yolov5 repo to the system path with the command you've supplied (so that torch can find the correct yolov5 modules)? Is this python command equivalent to running the bash command export PYTHONPATH="$PWD" from within the yolov5 repo?


zjZSTU commented Jul 11, 2020

> @zjZSTU hi there! That's interesting, so if I understand correctly to use a custom trained model in your repo the easiest route is simply to add ./yolov5 repo to the system path with the command you've supplied (so that torch can find the correct yolov5 modules)? Is this python command equivalent to running the bash command export PYTHONPATH="$PWD" from within the yolov5 repo?

In more detail:

As I mentioned before, I put the yolov5 source code into my repo:

└── yolov5
    ├── detect.py
    ├── models
    ├── __pycache__
    └── utils
...
...
├── weights
│   ├── best.pt
...
...
├── demo
│   ├── test.py

I use test.py to run the detection; the complete command is as follows:

python ./demo/test.py --source ./inference/images/ --weights ./weights/yolov5s.pt --conf 0.4

In order for Pickle to find the source files correctly, I added the following code to test.py:

import sys
sys.path.insert(0, './yolov5')

Summary

The key to solving this problem is the relative position of ./demo/test.py and ./yolov5/models/, so using the sys.path call or the PYTHONPATH environment variable has the same effect.
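To answer glenn-jocher's question directly, the two approaches are equivalent; a quick way to check (paths hypothetical, run from the repo root):

```shell
# Prepending the repo to PYTHONPATH before launching has the same effect as
# calling sys.path.insert(0, './yolov5') at the top of test.py: both put
# the folder that contains models/ and utils/ on the import search path.
export PYTHONPATH="$PWD/yolov5:$PYTHONPATH"
python3 -c 'import sys; print(any(p.endswith("yolov5") for p in sys.path))'
```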

priteshgohil commented Jul 21, 2020

@glenn-jocher Isn't it better if we save the model with state_dict? That would make it independent of the folder structure and easy to load anywhere. Is there a drawback to doing so?

cao-nv commented Aug 5, 2020

> @glenn-jocher Isn't it good if we save the model with state_dict?? It will make it independent of the folder structure and easy to load anywhere. Does it have a drawback if we do so?

IMO there is no difference. I exported and reloaded the state_dict of the saved checkpoint, loading it with the correct configuration file, and saw the same result.
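For reference, a minimal sketch of the state_dict route being discussed (toy nn.Linear model and an in-memory buffer, not YOLOv5 itself): only tensors keyed by parameter name are serialized, so loading needs no particular package layout, only code that rebuilds the same architecture.

```python
import io
import torch
import torch.nn as nn

net = nn.Linear(4, 2)

# state_dict serializes only named tensors, so loading it later does not
# require the original source tree on sys.path.
buf = io.BytesIO()
torch.save(net.state_dict(), buf)

# Rebuild the same architecture in code, then restore the weights.
buf.seek(0)
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load(buf))
restored.eval()

x = torch.randn(1, 4)
print(torch.equal(net(x), restored(x)))  # prints True: identical outputs
```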

github-actions bot commented Sep 5, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@medasuryatej

Adding the PYTHONPATH to the location of yolov5 did the trick and worked for me.

davesargrad commented Apr 15, 2021

Hi guys (@glenn-jocher, I hope you can take a quick peek at my structure and make a suggestion).

I seem to be bitten by this as well, and I can't quite see the solution. My directory structure is as follows:

./Main.py
./image_processing/multi_object_detector/models/
./image_processing/multi_object_detector/utils/
./image_processing/multi_object_detector/MultiObjectDetector.py

where models and utils are the folders from yolov5.

I call attempt_load in MultiObjectDetector.py:

def load_model(weights_file, device, imgsz):
    model = attempt_load(weights_file, map_location=device)  # load FP32 model

attempt_load then fails at the line that calls torch.load:

def attempt_load(weights, map_location=None):
    # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
    model = Ensemble()
    for w in weights if isinstance(weights, list) else [weights]:
        attempt_download(w)
        ckpt = torch.load(w, map_location=map_location)  # load
        model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model

I am trying to load yolov5s.pt. I've tried several different ways of augmenting the system path; none seem to work.

This is the most recent I've tried:

import sys
sys.path.insert(0, './image_processing/multi_object_detector')
sys.path.insert(0, './image_processing/multi_object_detector/models')

How should I update the path so that torch.load works?

Without these imports I see this error:
ModuleNotFoundError("No module named 'models'")

With these imports I see this error:
ValueError('attempted relative import beyond top-level package')

In both cases it's the call to the unpickler that fails:

result = unpickler.load()
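The rule of thumb here (paths below mirror the hypothetical tree above) is that the directory placed on sys.path must be the one that directly contains models/ and utils/, so that `import models` resolves exactly as it did inside the yolov5 repo at training time; inserting the models/ folder itself breaks the package's internal imports.

```python
import sys
from pathlib import Path

# Put the folder that *contains* models/ and utils/ on the path -- not the
# models/ folder itself. For the tree above, assuming the script runs from
# the repo root, that is:
root = Path.cwd() / "image_processing" / "multi_object_detector"
sys.path.insert(0, str(root))

# Now "import models" (which the unpickler runs internally) can succeed,
# and the package's own "from utils import ..." statements still resolve.
```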

davesargrad commented Apr 15, 2021

OK, so it looks like I can't put models and utils where I have them. If I instead put them under a folder named yolov5, the model loads. I think the proper solution would be to save the model pickle file using state_dicts; that seems much cleaner. I would suggest that the default yolov5s.pt file conform to this simpler mechanism.

davesargrad commented Apr 15, 2021

Do you have a sample somewhere that shows how to convert the detect code (detect.py) from using the pickled files as they stand today to loading only the state_dict? I assume this includes breaking out the setup of the model that the state is loaded into.

glenn-jocher commented Apr 15, 2021

@davesargrad for working with YOLOv5 models in custom python environments we recommend PyTorch Hub:

# Model
import torch
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Imports
# custom imports here

# Images
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# Inference
results = model(img)

@davesargrad

@glenn-jocher Thank you so much! It's truly a pleasure to see how responsive you are.

jitaxis commented Jan 15, 2023

> @glenn-jocher Isn't it good if we save the model with state_dict?? It will make it independent of the folder structure and easy to load anywhere. Does it have a drawback if we do so?
>
> IMO there is no difference. I export and reload the state_dict of the saved checkpoint to load by the correct configuration file and see the same result.

How do you convert an already saved model to state_dict(), and how do you load it using state_dict()? I have tried, but I seem to be missing keys, and the model doesn't look the same after loading, resulting in pretty much useless predictions.
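A self-contained sketch of the conversion (a toy Sequential model standing in for the YOLOv5 checkpoint, which actually wraps the model in a dict under 'model'/'ema'; weights_only=False is needed on recent PyTorch to unpickle a full module). It also shows where the "missing keys" symptom comes from: strict loading fails whenever the rebuilt architecture does not match the one that produced the state_dict.

```python
import io
import torch
import torch.nn as nn

# Toy stand-in for an already-pickled full model.
trained = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
full = io.BytesIO()
torch.save(trained, full)

# One-time conversion: loading the pickle still needs the defining code
# importable, but afterwards only the state_dict is kept.
full.seek(0)
state = torch.load(full, weights_only=False).state_dict()

# A mismatched architecture is what produces missing/unexpected key errors:
wrong = nn.Sequential(nn.Linear(4, 8), nn.ReLU())  # one layer short
caught = False
try:
    wrong.load_state_dict(state)  # strict=True by default
except RuntimeError:
    caught = True  # strict loading reports the unexpected keys

# Rebuilding the *same* architecture restores the model exactly:
same = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
same.load_state_dict(state)
same.eval()
print(torch.equal(trained(torch.ones(1, 4)), same(torch.ones(1, 4))))
```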

huni1023 commented Jan 25, 2023

@jitaxis Have you called model.eval() after loading your model?

jitaxis commented Jan 25, 2023 via email
