Follow these steps to set up the project:
- Clone the repo into both your local machine and your Newton account, since files will be transferred between the two.
- Create a virtual environment using conda:

  ```bash
  conda create -n military_bt python=3.10
  ```
- Navigate to the backend directory and install the required packages:

  ```bash
  cd backend
  pip install -r requirements.txt
  ```

- Pull the CodeLLaMa model from Hugging Face into your local directory using the Hugging Face CLI:

  ```bash
  huggingface-cli download codellama/CodeLlama-7b-Instruct-hf
  ```
- If you don't have the Hugging Face CLI installed, follow the installation instructions in the Hugging Face CLI documentation to install it and learn more.
- Navigate to your cache located at `/home/username/.cache/huggingface/hub` and locate the CodeLLaMa model.
- Copy the model into the `models/codellama` directory.
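The Hugging Face hub cache stores each model under `models--<org>--<name>/snapshots/<hash>`, and the actual weights live inside a snapshot directory. A small helper (hypothetical, not part of this repo) can resolve that path, which is also what `BASE_MODEL_PATH` in the `.env` file below must point at:

```python
from pathlib import Path

def find_snapshot_dir(cache_root: str, model_id: str) -> Path:
    """Return the snapshot directory for a cached Hugging Face model.

    The hub cache lays models out as models--<org>--<name>/snapshots/<hash>;
    if several snapshots exist, this sketch simply takes the last one sorted.
    """
    repo_dir = Path(cache_root) / ("models--" + model_id.replace("/", "--"))
    snapshots = sorted((repo_dir / "snapshots").iterdir())
    if not snapshots:
        raise FileNotFoundError(f"no snapshots found under {repo_dir}")
    return snapshots[-1]
```

For example, `find_snapshot_dir("/home/username/.cache/huggingface/hub", "codellama/CodeLlama-7b-Instruct-hf")` would return the directory to copy into `models/codellama`.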
- Set up your `.env` file with the following variables:

  ```
  PINECONE_API_KEY=
  REMOTE_HOST=newton.ist.ucf.edu
  REMOTE_USER=
  SSH_KEY_PATH=
  REMOTE_DIR= # create a directory in Newton for the files to copy over
  BASE_MODEL_PATH= # This should contain the path to the directory within the snapshots folder
  LORA_ADAPTER_PATH=codellama-bt-adapter
  ```
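A missing `.env` value usually surfaces only after a job has been submitted, so it can be worth validating the file up front. The following sketch is not part of the repo; it assumes plain `KEY=VALUE` lines with optional `#` comments, as in the template above:

```python
from pathlib import Path

# Keys the setup above expects to be filled in before running start_dev.sh.
REQUIRED = [
    "PINECONE_API_KEY", "REMOTE_HOST", "REMOTE_USER",
    "SSH_KEY_PATH", "REMOTE_DIR", "BASE_MODEL_PATH", "LORA_ADAPTER_PATH",
]

def load_env(path: str) -> dict:
    """Parse a simple KEY=VALUE .env file and check for required keys."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise ValueError(f"missing .env values: {missing}")
    return env
```

Running `load_env(".env")` before submitting will raise with the list of unset keys instead of failing remotely on Newton.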
- Run the shell script to submit the job to Newton:

  ```bash
  ./start_dev.sh
  ```
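The contents of `start_dev.sh` are not shown here, but conceptually it uses the `.env` values to reach Newton and submit work there. A hypothetical sketch of how such a submission command could be assembled (the `run_job.slurm` script name and the use of Slurm's `sbatch` are assumptions, not taken from the repo):

```python
def build_submit_command(env: dict, script: str = "run_job.slurm") -> list:
    """Assemble an ssh command that would submit a job script on a cluster.

    Hypothetical sketch only: the real start_dev.sh may differ, and the
    remote scheduler being Slurm is an assumption.
    """
    remote = "{}@{}".format(env["REMOTE_USER"], env["REMOTE_HOST"])
    # Run the submission from within the remote working directory.
    remote_cmd = "cd {} && sbatch {}".format(env["REMOTE_DIR"], script)
    return ["ssh", "-i", env["SSH_KEY_PATH"], remote, remote_cmd]
```

Passing the resulting list to `subprocess.run` would execute the submission without shell quoting issues on the local side.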