I was following the tutorial for installing vak with conda on macOS and on Linux. While the conda installation did not produce any errors, the program crashes with a segmentation fault when I run the commands.
(Adding sections here instead of replying to myself five times in a row)
I started with the MacOS environment and I think I can reproduce the error that @YMK123 observed:
(young-mi-env) davidnicholson@Davids-MacBook-Pro ~/Downloads
$ vak --help
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
 22865 abort vak --help
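In case it helps with debugging the OMP Error #15 above: here's a rough, stdlib-only Python sketch (the function name and glob patterns are mine, not anything from vak) that lists the OpenMP runtime libraries present in a conda env's lib directory. Seeing both a libiomp5 and a libomp there would match what the error message is complaining about.

```python
# Hypothetical diagnostic sketch: count the OpenMP runtime copies in a
# conda environment's lib/ directory. The library names (libiomp5,
# libomp, libgomp) are taken from the OMP #15 error message above.
import os
from pathlib import Path


def find_openmp_libs(env_prefix):
    """Return paths of OpenMP runtime libraries under env_prefix/lib."""
    lib_dir = Path(env_prefix) / "lib"
    if not lib_dir.is_dir():
        return []
    found = []
    for pattern in ("libiomp5*", "libomp*", "libgomp*"):
        found.extend(sorted(lib_dir.glob(pattern)))
    return found


if __name__ == "__main__":
    env = os.environ.get("CONDA_PREFIX", "")
    for lib in find_openmp_libs(env):
        print(lib.name)
    # More than one distinct OpenMP runtime here is the situation that
    # triggers OMP Error #15.
```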
Ok now I am replying separately because I want you to make sure you see this @YMK123
Can you please try creating a new environment and testing the install with the following commands?
conda create -n vak-env python=3.9 # because it's what you had before
conda activate vak-env
conda install pytorch torchvision -c pytorch
conda install vak tweetynet -c conda-forge
When I did this it worked for me on Mac.
The key difference here is that we’re installing pytorch and torchvision first, from the pytorch channel.
We fixed a similar issue the same way before:
I think what’s going on is that when we just specify the conda-forge channel we get some version of pytorch that’s built by conda-forge, which we do not want.
I need to confirm that though.
The fact that this is now affecting multiple people is a reason to figure that out sooner rather than later.
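One way to check which channel a package actually came from, without relying on conda output formatting: a sketch that assumes the usual conda env layout, where each installed package has a JSON record under conda-meta/ with a channel field (the function name here is mine).

```python
# Sketch: report the channel each installed pytorch* package came from,
# by reading the JSON records conda keeps in $CONDA_PREFIX/conda-meta/.
import json
import os
from pathlib import Path


def package_channels(env_prefix, name_prefix="pytorch"):
    """Map package name -> channel for packages matching name_prefix."""
    meta_dir = Path(env_prefix) / "conda-meta"
    channels = {}
    if not meta_dir.is_dir():
        return channels
    for record in meta_dir.glob(f"{name_prefix}*.json"):
        with record.open() as fp:
            data = json.load(fp)
        channels[data.get("name", record.stem)] = data.get("channel", "?")
    return channels


if __name__ == "__main__":
    env = os.environ.get("CONDA_PREFIX", "")
    for name, channel in package_channels(env).items():
        print(f"{name}: {channel}")
```

If pytorch shows up with a conda-forge channel instead of pytorch, that would support the theory above.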
To try on Linux you’ll want to modify the commands as appropriate for your machine.
Not sure if you have a GPU; you can go to Start Locally | PyTorch on that machine and the site should let you pick the right install command.
Basically you’ll need to say either conda install pytorch torchvision cpuonly -c pytorch if you’re not using a GPU, or specify the cudatoolkit version you need if you are, e.g. conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
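A quick way to find out which of those two commands applies, once any version of PyTorch is installed, is to ask PyTorch itself whether it can see a CUDA device:

```python
# Quick GPU check, to decide between the cpuonly and cudatoolkit
# install commands above.
def gpu_check():
    """Return True/False for CUDA availability, or None if PyTorch
    is not installed in this environment."""
    try:
        import torch
    except ImportError:
        return None
    return torch.cuda.is_available()


if __name__ == "__main__":
    result = gpu_check()
    if result is None:
        print("PyTorch is not installed in this environment")
    else:
        print("CUDA available:", result)
```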
@nicholdav will confirm or correct me if I’m wrong. There should be a preference for using pytorch on Linux > Windows > Mac.
I don’t think there should be a preference!
There are at least a few people training on Mac that I know of.
But @yardenc you are right that there is a bit of a “bias” where it’s easier to install on Unix systems, unfortunately.
One good thing about TweetyNet is that for the use cases we tested it’s pretty lightweight and you don’t necessarily need a GPU. @YMK123 that could of course be different if your data is not an acoustically isolated individual. But as a very rough rule of thumb if you’re training on CPU then it will probably be a couple hours to overnight to train a model, versus ~half an hour to an hour on a GPU.
@nicholdav Thanks, the installation of pytorch and torchvision before vak seems to do the trick! While I was able to run through the remainder of the tutorial smoothly, I had some trouble with vak prep on my own pilot dataset that I’ll add as a separate post to keep the issues separate.
Previously, I was able to train a CNN model on the Mac through keras which took a few hours, but I haven’t tried any RNN-based model. The recordings are from single individuals, but I hope to train the model across individuals which may also add to the computational time.