@csseow Hi,
Thanks for trying out the reinstallation.
May I check with you if the same error also appears when running the code on your PC which does not have a GPU?
As the original implementation was already working with CPU, could you also try to check out the original code in a separate location and see if the code still works?
As I do not have full visibility of your changes, it is hard for me to debug from here. Would you happen to have a repo with your changes that I could look at, to see if I can replicate the error?
All things NLP
@raymond_aisg Hi raymond,
Sorry for the late reply as I was busy with NS for the past few days.
I have tried running the code on my laptop, but another problem appears with the srsly package (pulled in by allennlp), and it happens before the original torch problem.
The URLs for my client and server GitHub repositories are listed below:
https://github.com/seowcs/GraphNet-frontend.git
https://github.com/seowcs/GraphNet-backend.git
Please do note that the uploaded frontend repository should be working as intended. The backend repo is the functional version with CUDA enabled for torch, and it works only on my desktop with a CUDA-compatible GPU. The problem originally arose from the '/extract' route, where the preprocessor and LSR models are used.
I truly do appreciate your help. Thank you.
@csseow Hi,
Thanks for sharing your code. I'm only able to investigate the LSR portion of the code base as I'm not familiar with the other portion of the solution.
I've copied the code from the `/extract` route function and tried replicating the issue in my local environment. I encountered an issue when installing the dependencies from `requirements.txt`, but I suspect this is simply because I'm running the code on macOS, and I was able to bypass the error by removing the `+cpu` suffix.
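For reference, the change looked roughly like this. The `+cpu` suffix is how PyTorch labels its CPU-only wheels, so I'm assuming the pin that tripped me up was the torch one; the version number below is illustrative, not the exact pin from your repo:

```
# original pin in requirements.txt (a CPU-only wheel, which has no macOS build)
torch==1.8.1+cpu
# what I installed instead on macOS
torch==1.8.1
```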
Once I fixed the environment, I was able to run the `/extract` route function without issues. I've attached the code I copied out for your reference.
I do not think there is anything out of place with your implementation, which leaves the dependencies as the likely source of the issue. May I suggest that you create a new virtual environment and reinstall all your required dependencies before re-running your solution, to see if that fixes the issue?
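If it helps, this is roughly the sequence I have in mind (assuming Python's built-in venv and pip; the environment name is just an example, and the activation command shown is for Windows, use `source .venv/bin/activate` on Linux/macOS):

```
python -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt
```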
All things NLP
@raymond_aisg Hi raymond,
Thank you so much for taking time out of your weekend to help me with my issue. I'm elated to say that creating a new virtual environment has indeed worked for me! I really can't thank you enough. If you are free and it's not too much trouble, could you explain why installing the existing dependencies in a new virtual environment works, so that I may use this knowledge in other projects? I really appreciate your help.
@csseow Hi,
I'm glad to hear that the solution works now.
There are many, many possible reasons why setting up a new environment works. For example, some poorly implemented packages cache certain configs during the first run; when a dependent package is later updated, the cache isn't updated, resulting in unexpected errors. (My guess is this is most likely what happened in this case: the previous PyTorch installation expected CUDA support, and overwriting it with a non-CUDA version resulted in mismatched configs.)
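If you ever want to verify this on your side, a quick way to see which PyTorch build an environment actually has is something like:

```python
import torch

# "+cpu" vs "+cuXXX" in the version string tells you which build is installed
print(torch.__version__)
# should be True only in an environment with a CUDA build and a CUDA-capable GPU
print(torch.cuda.is_available())
```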
Also, dependency resolution is a very hard problem, and the complexity goes up when a solution requires a lot of 3rd-party packages. When a package is updated in a virtual environment, the package manager (e.g. pip, Conda) needs to re-evaluate every installed package to ensure that their respective dependencies are still met (an NP-hard problem). This process tends to be problematic and can result in unsolvable dependency conflicts; most of the time the package manager will flag such issues when it encounters them, but quite often this kind of issue does not surface.
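For what it's worth, pip also ships a small command that surfaces broken dependency sets in an existing environment, which can be handy before you resort to rebuilding it:

```
pip check                 # reports installed packages whose declared dependencies are missing or incompatible
pip list --format=freeze  # snapshot of what is actually installed, useful for comparing two environments
```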
In my experience, re-installing dependencies in a new virtual environment is the equivalent of 'turning the PC off and on again' to resolve unexplainable issues when developing in Python.
https://pip.pypa.io/en/stable/topics/more-dependency-resolution/
Hope this is useful.
All things NLP