Documentation - API Reference - Changelog - Bug reports - Discord
⚠️ Cortex.cpp is currently in active development. This document outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.
Cortex.cpp is a Local AI engine that is used to run and customize LLMs. Cortex can be deployed as a standalone server, or integrated into apps like Jan.ai.
Cortex.cpp is multi-engine: it uses llama.cpp
as the default engine but also supports the following:

- ONNXRuntime
- TensorRT
You can install a nightly (unstable) version of Cortex from Discord here: https://discord.gg/nGp6PMrUqS
Cortex.cpp supports various models available on the Cortex Hub. Once downloaded, all model source files are stored in `~\cortexcpp\models`.
Example models:
| Model | llama.cpp `:gguf` | TensorRT `:tensorrt` | ONNXRuntime `:onnx` | Command |
|---|---|---|---|---|
| llama3.1 | ✅ | | ✅ | `cortex run llama3.1:gguf` |
| llama3 | ✅ | ✅ | ✅ | `cortex run llama3` |
| mistral | ✅ | ✅ | ✅ | `cortex run mistral` |
| qwen2 | ✅ | | | `cortex run qwen2:7b-gguf` |
| codestral | ✅ | | | `cortex run codestral:22b-gguf` |
| command-r | ✅ | | | `cortex run command-r:35b-gguf` |
| gemma | ✅ | | ✅ | `cortex run gemma` |
| mixtral | ✅ | | | `cortex run mixtral:7x8b-gguf` |
| openhermes-2.5 | ✅ | ✅ | ✅ | `cortex run openhermes-2.5` |
| phi3 (medium) | ✅ | | ✅ | `cortex run phi3:medium` |
| phi3 (mini) | ✅ | | ✅ | `cortex run phi3:mini` |
| tinyllama | ✅ | | | `cortex run tinyllama:1b-gguf` |
Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
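As a rough sanity check of the RAM guidance above (an assumption, not an official Cortex formula): a 4-bit-quantized GGUF model stores roughly half a byte per parameter, and the runtime needs extra headroom for the KV cache and buffers. A minimal sketch, with both the bytes-per-parameter and overhead factors being assumptions:

```python
def estimate_ram_gb(params_billions: float,
                    bytes_per_param: float = 0.5,
                    overhead_factor: float = 1.5) -> float:
    """Rough RAM estimate for a quantized model.

    bytes_per_param=0.5 assumes 4-bit quantization; overhead_factor
    approximates KV cache and runtime buffers (both are assumptions).
    """
    return params_billions * bytes_per_param * overhead_factor

# Compare against the guidance: 8 GB for 7B, 16 GB for 14B, 32 GB for 32B
for size in (7, 14, 32):
    print(f"{size}B model: ~{estimate_ram_gb(size):.1f} GB")
```

Each estimate lands comfortably under the recommended RAM for its size class, which is consistent with the note above.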
For complete details on CLI commands, please refer to our CLI documentation.
Cortex.cpp includes a REST API accessible at `localhost:3928`. For a complete list of endpoints and their usage, visit our API documentation.
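As a sketch of how an app might call the server, assuming an OpenAI-style chat-completions route and payload shape (the exact endpoint path, field names, and model id used here are assumptions; see the API documentation for the authoritative reference):

```python
import json
from urllib import request

# Assumed endpoint and payload shape; check the API documentation
# for the exact routes and fields.
URL = "http://localhost:3928/v1/chat/completions"
payload = {
    "model": "tinyllama:1b-gguf",
    "messages": [{"role": "user", "content": "Hello!"}],
}

def chat(url: str, body: dict) -> dict:
    """POST the body as JSON and return the decoded response."""
    req = request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# chat(URL, payload)  # requires a running Cortex server
print(json.dumps(payload, indent=2))
```

The network call is left commented out since it requires a running server; the printed payload shows the request body an integrating app would send.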
- Navigate to **Add or Remove Programs**.
- Search for Cortex.cpp and click **Uninstall**.
Run the uninstaller script:

```shell
sudo sh cortex-uninstall.sh
```

If you installed the Debian package, remove it with `apt`:

```shell
sudo apt remove cortexcpp
```
- Clone the Cortex.cpp repository here.
- Navigate to the `engine > vcpkg` folder.
- Configure vcpkg:

```shell
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
```

- Build Cortex.cpp inside the `build` folder:

```shell
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
```

- Use Visual Studio with the C++ development kit to build the project using the files generated in the `build` folder.
- Verify that Cortex.cpp is installed correctly by getting help information:

```shell
# Get the help information
cortex -h
```
- Clone the Cortex.cpp repository here.
- Navigate to the `engine > vcpkg` folder.
- Configure vcpkg:

```shell
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

- Build Cortex.cpp inside the `build` folder:

```shell
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```shell
# Get the help information
cortex -h
```
- Clone the Cortex.cpp repository here.
- Navigate to the `engine > vcpkg` folder.
- Configure vcpkg:

```shell
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
```

- Build Cortex.cpp inside the `build` folder:

```shell
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
```

- Verify that Cortex.cpp is installed correctly by getting help information:

```shell
# Get help
cortex
```
- For support, please file a GitHub ticket.
- For questions, join our Discord here.
- For long-form inquiries, please email hello@jan.ai.