🚧 Cortex is under construction.

cortex pull

This command downloads machine learning models from various model hubs, including the popular 🤗 Hugging Face.

By default, models are downloaded to the `node_modules` library path. For additional information on storage paths and options, refer here.


This command is compatible with all OpenAI and OpenAI-compatible endpoints.


The following alias is also available for downloading models:

  • `cortex download`


Preconfigured Models

Preconfigured models (with optimal runtime parameters and templates) are available from the Jan Model Hub on Hugging Face.

Models can be downloaded using a Docker-like interface with the following syntax: `repo_name:branch_name`. Each variant may include different quantizations and sizes, typically organized in the repository's branches.
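To make the Docker-like reference syntax concrete, here is a minimal sketch of how such a reference splits into a repository name and a branch (variant) name. The `"main"` fallback for references without a branch is an assumption for illustration, not documented cortex behavior.

```python
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split a `repo_name:branch_name` reference into its two parts.

    The "main" default branch is an assumption for this sketch;
    cortex's actual fallback branch is not specified here.
    """
    repo, sep, branch = ref.partition(":")
    return repo, branch if sep else "main"

print(parse_model_ref("llama3:7b"))  # ('llama3', '7b')
```

A reference like `llama3:8b-instruct-v3-gguf-Q4_K_M` parses the same way: everything after the first colon is treated as the branch name, which is where the size, format, and quantization variants live.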

Available models include `llama3`, `mistral`, `tinyllama`, and many more.


New models will soon be added to Hugging Face's janhq repository.

```shell
# Pull a specific variant with `repo_name:branch`
cortex pull llama3:7b
```

You can also download size, format, and quantization variants of each model.

```shell
cortex pull llama3:8b-instruct-v3-gguf-Q4_K_M
cortex pull llama3:8b-instruct-v3-tensorrt-llm
```


Model variants are provided via the branches in each model's Hugging Face repo.
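Since Hugging Face model repositories are ordinary git repositories, one way to inspect which variants (branches) a model offers, without cortex, is plain git. The repository URL below is illustrative; substitute the model repo you are interested in.

```shell
# List the branches (variants) of a model repository on Hugging Face.
# The URL is an example; point it at the repo you want to inspect.
git ls-remote --heads https://huggingface.co/cortexso/llama3
```

Each line of output names one `refs/heads/<branch>` entry, and each branch corresponds to one pullable variant.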

Hugging Face Models

You can download any GGUF, TensorRT-LLM, or other supported-format model directly from Hugging Face.

```shell
# cortex pull org_name/repo_name
cortex pull microsoft/Phi-3-mini-4k-instruct-gguf
```


Options

| Option | Description | Required |
|--------|-------------|----------|
| `-h`, `--help` | Display help information for the command. | No |