Ollama on Biowulf

Ollama is a command-line tool that allows users to run large language models (LLMs) locally.

  • It can be used in several ways: an interactive shell, a REST API, or a Python library (see the curl sketch below).
  • It provides pre-built models, including Llama 4, Mistral, and Gemma, that can be used in a variety of applications.
  • It will use a GPU if one is available; otherwise it falls back to the CPU.
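
  As an example of the REST API, a running server can be queried with curl. This is a minimal sketch; it assumes a server already started with ollama_start (described below) and OLLAMA_HOST set to the address that ollama_start prints:

    # Minimal sketch: query a running Ollama server over its REST API.
    # Assumes OLLAMA_HOST was set as printed by ollama_start (see below).
    curl http://$OLLAMA_HOST/api/generate -d '{
      "model": "gemma3:1b",
      "prompt": "what is long read sequencing",
      "stream": false
    }'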

    Hardware requirements

    Quantization considerations: 4-bit quantization reduces memory requirements to roughly 25% of the FP16 footprint, so quantized models are highly recommended. See the table below.
    Model Size   VRAM (FP16)   VRAM (4-bit)   GPU type
    1-3B         4-6 GB        ~2 GB          K80, P100, V100, V100x, A100
    7-8B         14-16 GB      ~6-8 GB        P100, V100, V100x, A100
    13-14B       26-28 GB      ~12-16 GB      V100x, A100
    70B+         140+ GB       ~35-40 GB      A100 (4-bit only)
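
    Many models in the Ollama library are published in explicitly quantized variants. A sketch of pulling one (the q4_K_M tag below is illustrative; available tags vary by model and are listed on the model's page at ollama.com):

    # Pull an explicitly 4-bit-quantized variant and inspect it.
    # The tag below is an assumption; check the model's page for real tags.
    ollama pull llama3.1:8b-instruct-q4_K_M
    ollama show llama3.1:8b-instruct-q4_K_M   # prints parameter count and quantization level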
    Interactive job
    Interactive jobs should be used for debugging, graphics, or applications that cannot be run as batch jobs.

    Allocate an interactive session and run the program.
    Sample session (user input in bold):

    [user@biowulf]$ sinteractive --gres=gpu:1,lscratch:10 --constraint="gpuv100|gpuv100x|gpua100" -c 8 --mem=10g --tunnel 
    salloc.exe: Pending job allocation 46116226
    salloc.exe: job 46116226 queued and waiting for resources
    salloc.exe: job 46116226 has been allocated resources
    salloc.exe: Granted job allocation 46116226
    salloc.exe: Waiting for resource configuration
    salloc.exe: Nodes cn3144 are ready for job
    
    [user@cn3144 ~]$ module load ollama
    
    [user@cn3144 ~]$ cd /data/$USER/
    
    [user@cn3144 ~]$ ollama_start
    Running ollama on localhost:xxxxx
    
    ######################################
    export OLLAMA_HOST=localhost:xxxxx
    ######################################
    
    [user@cn3144 ~]$ export OLLAMA_HOST=localhost:xxxxx   # or "source $SLURM_JOB_ID/ollama.sh"
    [user@cn3144 ~]$ ollama list
    [user@cn3144 ~]$ ollama pull gemma3:1b
    [user@cn3144 ~]$ ollama run gemma3:1b
    >>> what is long read sequencing
    ... (model response) ...
    >>> /bye
    
    [user@cn3144 ~]$ # run gemma3:1b non-interactively and redirect the response to response.txt
    [user@cn3144 ~]$ ollama run gemma3:1b "what is long read sequencing" > response.txt
    [user@cn3144 ~]$ ollama_stop
    Terminated
    
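    ollama run also reads a prompt from standard input, which is convenient for scripting within the same session (a sketch using the model pulled above):

    [user@cn3144 ~]$ echo "what is long read sequencing" | ollama run gemma3:1b > response.txt
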

    Batch job
    Most jobs should be run as batch jobs.

    Create a batch input file (e.g. ollama_job.sh). For example:

    #!/bin/bash
    set -e
    module load ollama
    cd /data/$USER
    ollama_start                      # start the server; writes $SLURM_JOB_ID/ollama.sh
    sleep 2                           # give the server a moment to come up
    source $SLURM_JOB_ID/ollama.sh    # sets OLLAMA_HOST for this job's server
    ollama run gemma3:1b "what is long read sequencing" > response.txt
    ollama_stop
    

    Submit this job using the Slurm sbatch command.

    sbatch --partition=gpu --gres=gpu:1,lscratch:10 --constraint="gpuv100|gpuv100x|gpua100" -c 8 --mem=10g ollama_job.sh
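
    The same pattern extends to running several prompts in one batch job. A sketch of a variant script, where prompts.txt is a hypothetical input file with one prompt per line:

    #!/bin/bash
    set -e
    module load ollama
    cd /data/$USER
    ollama_start
    sleep 2
    source $SLURM_JOB_ID/ollama.sh
    ollama pull gemma3:1b
    # prompts.txt is a hypothetical input file (one prompt per line);
    # each response is written to its own numbered file
    n=0
    while IFS= read -r prompt; do
        n=$((n+1))
        ollama run gemma3:1b "$prompt" > response_${n}.txt
    done < prompts.txt
    ollama_stop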