Cortex.cpp


Documentation - API Reference - Changelog - Bug reports - Discord

⚠️ Cortex.cpp is currently under active development. This README describes the intended behavior of Cortex, which may not yet be fully implemented in the codebase.

Overview

Cortex.cpp is a local AI engine for running and customizing LLMs. Cortex can be deployed as a standalone server or integrated into apps like Jan.ai.

Cortex.cpp is a multi-engine runtime: llama.cpp is the default engine, with TensorRT and ONNXRuntime also supported.
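As a sketch of how switching engines looks in practice, the engine is selected per model via a tag suffix on the model ID (the `:gguf`, `:tensorrt`, and `:onnx` tags shown in the model table below). The `engines` subcommand used here is an assumption not confirmed by this README; see the CLI documentation for the actual commands.

```shell
# Run the same model family on different engines via the tag suffix
cortex run llama3.1:gguf        # llama.cpp (default engine)
cortex run llama3.1:onnx        # ONNXRuntime variant, if published on the Hub

# Hypothetical engine management (assumed subcommand, check the CLI docs)
cortex engines list
```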

Installation

You can install a nightly (unstable) build of Cortex via our Discord server: https://discord.gg/nGp6PMrUqS

Built-in Model Library

Cortex.cpp supports various models available on the Cortex Hub. Once downloaded, all model source files will be stored in ~\cortexcpp\models.

Example models:

Model            Command
llama3.1         cortex run llama3.1:gguf
llama3           cortex run llama3
mistral          cortex run mistral
qwen2            cortex run qwen2:7b-gguf
codestral        cortex run codestral:22b-gguf
command-r        cortex run command-r:35b-gguf
gemma            cortex run gemma
mixtral          cortex run mixtral:7x8b-gguf
openhermes-2.5   cortex run openhermes-2.5
phi3 (medium)    cortex run phi3:medium
phi3 (mini)      cortex run phi3:mini
tinyllama        cortex run tinyllama:1b-gguf

The engine variant is selected with a tag suffix on the model ID: :gguf (llama.cpp), :tensorrt (TensorRT), or :onnx (ONNXRuntime).

Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 14B models, and 32 GB to run the 32B models.
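Putting the table above into practice, a typical session downloads a model from the Cortex Hub and then starts it. The `pull` subcommand is an assumption not confirmed by this README (`cortex run` may also download on first use); only `cortex run` appears in the table above.

```shell
# Fetch a small model from the Cortex Hub (assumed `pull` subcommand),
# then start it. Model files land in ~\cortexcpp\models.
cortex pull tinyllama:1b-gguf
cortex run tinyllama:1b-gguf
```

tinyllama is a good first choice here because its 1B variant fits comfortably under the 8 GB RAM guideline above.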

Cortex.cpp CLI Commands

For complete details on CLI commands, please refer to our CLI documentation.

REST API

Cortex.cpp includes a REST API accessible at localhost:3928. For a complete list of endpoints and their usage, visit our API documentation.
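As a minimal sketch of calling that API, the request below assumes an OpenAI-compatible chat-completions endpoint; the exact path and payload shape are assumptions, not confirmed by this README, so check the API documentation before relying on them.

```shell
# Assumed endpoint path and payload shape; verify against the API docs.
curl http://localhost:3928/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tinyllama:1b-gguf",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```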

Uninstallation

Windows

  1. Navigate to Add or Remove Programs.
  2. Search for Cortex.cpp and click Uninstall.

macOS

Run the uninstaller script:

sudo sh cortex-uninstall.sh

Linux

sudo apt remove cortexcpp

Build from Source

Windows

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine > vcpkg folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.bat
vcpkg install
  4. Generate the build files inside the build folder:
mkdir build
cd build
cmake .. -DBUILD_SHARED_LIBS=OFF -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-windows-static
  5. Use Visual Studio with the C++ development workload to build the project using the files generated in the build folder.
  6. Verify that Cortex.cpp is installed correctly by getting help information.
# Get the help information
cortex -h

MacOS

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine > vcpkg folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
  4. Build Cortex.cpp inside the build folder:
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
  5. Verify that Cortex.cpp is installed correctly by getting help information.
# Get the help information
cortex -h

Linux

  1. Clone the Cortex.cpp repository here.
  2. Navigate to the engine > vcpkg folder.
  3. Configure vcpkg:
cd vcpkg
./bootstrap-vcpkg.sh
vcpkg install
  4. Build Cortex.cpp inside the build folder:
mkdir build
cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=path_to_vcpkg_folder/vcpkg/scripts/buildsystems/vcpkg.cmake
make -j4
  5. Verify that Cortex.cpp is installed correctly by getting help information.
# Get the help information
cortex -h

Contact Support