Hi,
I was following the tutorial for installing vak with conda on macOS and on Linux. While the conda installation did not produce any errors, the program produces a segmentation fault when I enter the vak commands.
Also, can you please enter the commands below on both Mac and Linux, and reply with the environment files they produce?
conda activate vak-env # if you haven't already
conda env export > environment.yml
conda list --explicit > spec-file.txt
Please name the files differently to indicate which operating system you created them on, e.g.
On Mac:
conda activate vak-env # if you haven't already
conda env export > mac-environment.yml
conda list --explicit > mac-spec-file.txt
On Linux:
conda activate vak-env # if you haven't already
conda env export > linux-environment.yml
conda list --explicit > linux-spec-file.txt
Thank you!
I should be able to use those files to recreate the environment and see if I can reproduce the error.
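For reference, here's roughly how I'd recreate the environment on my end from those files (using the Mac filenames above as an example; note the explicit spec file only recreates an environment on the same operating system it was exported from):
conda env create -n vak-env-repro -f mac-environment.yml # recreate from the exported YAML
conda create -n vak-env-repro --file mac-spec-file.txt # or recreate from the explicit spec file, with exact builds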
I remember that @mizuki had a similar error on WSL recently, but AFAIK installs on Mac and Linux should be working! Sorry again that you’re running into this; we will get it sorted out.
Replying to myself to save @YMK123 the work, since I failed to realize people couldn’t attach files because of the default Discourse settings. (Thank you @YMK123 for pointing us to the right Discourse meta post: Uploading attachments to topics - support - Discourse Meta. For future reference, this is under Settings > Files.)
I can confirm that I used the same conda commands, and did not use pip.
OK, that makes me think this is something conda-forge-specific.
(Adding sections here instead of replying to myself five times in a row)
Mac env
I started with the MacOS environment and I think I can reproduce the error that @YMK123 observed:
(young-mi-env) davidnicholson@Davids-MacBook-Pro ~/Downloads
$ vak --help
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
[1] 22865 abort vak --help
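(For anyone else who hits this before we fix it: the error message itself names an environment-variable workaround, shown below, but per that same message it is unsafe and can crash or silently give wrong results, so please prefer the install fix in the next reply.)
export KMP_DUPLICATE_LIB_OK=TRUE # unsafe workaround named in the OMP hint above; not recommended
vak --help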
OK, now I am replying separately because I want to make sure you see this, @YMK123.
Can you please try creating a new environment and testing the install with the following commands?
On MacOS:
conda create -n vak-env python=3.9 # because it's what you had before
conda activate vak-env
conda install pytorch torchvision -c pytorch
conda install vak tweetynet -c conda-forge
vak --help
When I did this it worked for me on Mac.
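If it’s easier to keep in one file, the same install could probably be written as an environment file like the sketch below. Channel order matters here (pytorch before conda-forge), and I haven’t tested this file myself, so the step-by-step commands above are what I actually verified:
name: vak-env
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.9
  - pytorch
  - torchvision
  - vak
  - tweetynet
You would then create it with conda env create -f environment.yml.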
The key difference here is that we’re installing pytorch and torchvision first, from the pytorch channel.
We fixed a similar issue the same way before:
I think what’s going on is that when we just specify the conda-forge channel, we get a version of pytorch that’s built by conda-forge, which we do not want.
I need to confirm that though.
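One quick way I can think of to check that (just a guess at a diagnostic, I haven’t run it yet) would be to compare the channel column for the relevant packages in a broken environment versus a working one:
conda list pytorch # the rightmost column shows which channel each build came from
conda list | grep -i -E 'pytorch|openmp|libomp' # also shows which OpenMP runtimes ended up installed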
The fact that this is now affecting multiple people is a reason to figure that out sooner rather than later.
To try on Linux you’ll want to modify the commands as appropriate for your machine.
Not sure if you have a GPU; you can go to Start Locally | PyTorch on that machine and the site should let you pick the right install command for your setup.
Basically you’ll need to run conda install pytorch torchvision cpuonly -c pytorch if you’re not using a GPU, or specify the cudatoolkit version you need if you are using a GPU, e.g. conda install pytorch torchvision cudatoolkit=10.2 -c pytorch.
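If you’re not sure whether that machine has an NVIDIA GPU, a quick check (my suggestion, not from the PyTorch page) is:
nvidia-smi # prints the GPUs and driver CUDA version if there is one; if the command isn't found, go with the cpuonly install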
@nicholdav will confirm or correct me if I’m wrong. There should be a preference for using pytorch on Linux > Windows > Mac.
I don’t think there should be a preference!
There are at least a few people training on Mac that I know of.
But @yardenc, you are right that there is a bit of a “bias” where it’s easier to install on Unix systems, unfortunately.
One good thing about TweetyNet is that, for the use cases we tested, it’s pretty lightweight and you don’t necessarily need a GPU. @YMK123, that could of course be different if your data is not from an acoustically isolated individual. But as a very rough rule of thumb, if you’re training on CPU it will probably take a couple hours to overnight to train a model, versus roughly half an hour to an hour on a GPU.
@nicholdav Thanks, installing pytorch and torchvision before vak seems to do the trick! While I was able to run through the remainder of the tutorial smoothly, I had some trouble with vak prep on my own pilot dataset, which I’ll add as a separate post to keep the issues separate.
Previously, I was able to train a CNN model on the Mac through Keras, which took a few hours, but I haven’t tried any RNN-based model. The recordings are from single individuals, but I hope to train the model across individuals, which may also add to the computational time.