Instructions for using 0Time/INCEPT-SH with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use 0Time/INCEPT-SH with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="0Time/INCEPT-SH",
    filename="incept-sh.gguf",
)

# System context and query follow the prompt format documented below;
# temperature 0.0 gives the greedy decoding this card recommends.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "ubuntu 22.04 bash non-root"},
        {"role": "user", "content": "list all open ports"},
    ],
    temperature=0.0,
)
print(response["choices"][0]["message"]["content"])
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use 0Time/INCEPT-SH with llama.cpp:
Install with Homebrew (macOS, Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf 0Time/INCEPT-SH

# Run inference directly in the terminal:
llama-cli -hf 0Time/INCEPT-SH
Install with WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf 0Time/INCEPT-SH

# Run inference directly in the terminal:
llama-cli -hf 0Time/INCEPT-SH
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf 0Time/INCEPT-SH

# Run inference directly in the terminal:
./llama-cli -hf 0Time/INCEPT-SH
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf 0Time/INCEPT-SH

# Run inference directly in the terminal:
./build/bin/llama-cli -hf 0Time/INCEPT-SH
Use Docker
docker model run hf.co/0Time/INCEPT-SH
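However llama-server was installed above, it exposes an OpenAI-compatible chat-completions endpoint. A minimal Python sketch for querying it, assuming the server's default port 8080 (the same endpoint the Pi and Hermes sections below configure):

import requests

# llama-server speaks the OpenAI chat-completions protocol on port 8080 by default.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "0Time/INCEPT-SH",
        "messages": [
            {"role": "system", "content": "ubuntu 22.04 bash non-root"},
            {"role": "user", "content": "list all open ports"},
        ],
        "temperature": 0.0,  # greedy decoding, per this card
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])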
- LM Studio
- Jan
- Ollama
How to use 0Time/INCEPT-SH with Ollama:
ollama run hf.co/0Time/INCEPT-SH
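Once pulled, the model is also reachable through Ollama's local REST API. A minimal sketch, assuming Ollama's default port 11434 and the model name as pulled above:

import requests

# Ollama's chat endpoint; stream=False returns a single JSON object.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/0Time/INCEPT-SH",
        "messages": [
            {"role": "user", "content": "list all open ports"},
        ],
        "stream": False,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])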
- Unsloth Studio
How to use 0Time/INCEPT-SH with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for 0Time/INCEPT-SH to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for 0Time/INCEPT-SH to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for 0Time/INCEPT-SH to start chatting
- Pi
How to use 0Time/INCEPT-SH with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf 0Time/INCEPT-SH
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "0Time/INCEPT-SH" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use 0Time/INCEPT-SH with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf 0Time/INCEPT-SH
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default 0Time/INCEPT-SH
Run Hermes
hermes
- Docker Model Runner
How to use 0Time/INCEPT-SH with Docker Model Runner:
docker model run hf.co/0Time/INCEPT-SH
- Lemonade
How to use 0Time/INCEPT-SH with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull 0Time/INCEPT-SH
Run and chat with the model
lemonade run user.INCEPT-SH-{{QUANT_TAG}}
List all available models
lemonade list
INCEPT.sh
Offline command inference engine for Linux: a fine-tuned Qwen3.5-0.8B (GGUF Q8_0, 774MB) designed to run on low-resource and edge devices, with no GPU, no API, and no internet connection required at runtime.
Benchmark: 99/100 on a structured 100-question Linux command evaluation (Ubuntu 22.04, bash, non-root).
Installation
curl -fsSL https://raw.githubusercontent.com/0-Time/INCEPT.sh/main/install.sh | bash
Supports: Debian/Ubuntu, RHEL/Fedora, CentOS, Arch, openSUSE.
Manual Model Setup
# Download model
huggingface-cli download 0Time/INCEPT-SH \
incept-sh.gguf --local-dir ./models
# Clone and install
git clone https://github.com/0-Time/INCEPT.sh
cd INCEPT.sh
pip install -e ".[cli]"
incept
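The huggingface-cli download can also be scripted. A minimal Python sketch with huggingface_hub, equivalent to the CLI command above:

from huggingface_hub import hf_hub_download

# Fetch the GGUF file into ./models, mirroring the huggingface-cli call.
path = hf_hub_download(
    repo_id="0Time/INCEPT-SH",
    filename="incept-sh.gguf",
    local_dir="./models",
)
print(path)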
Usage
# Interactive CLI
incept
# One-shot
incept -c "list all open ports"
# Minimal output (pipe-friendly)
incept -c "find large files" -m
# With model reasoning
incept --think
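Because -m keeps one-shot output pipe-friendly, the CLI composes well with scripts. A sketch wrapping it with subprocess, assuming incept is on PATH as installed above:

import subprocess

# One-shot query in minimal mode; stdout should carry only the generated command.
result = subprocess.run(
    ["incept", "-c", "find large files", "-m"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())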
CLI Commands
| Command | Description |
|---|---|
| /think on\|off | Toggle chain-of-thought reasoning |
| /context | Show detected system context |
| /help | List available commands |
| /exit | Exit |
Prompt Format
ChatML with a system context line:
<|im_start|>system
ubuntu 22.04 bash non-root
<|im_end|>
<|im_start|>user
{natural language query}
<|im_end|>
<|im_start|>assistant
<think>
</think>
Inference temperature: 0.0 (greedy decoding).
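The template can also be filled in by hand for raw (non-chat) completion. A minimal llama-cpp-python sketch, assuming the ChatML special tokens are passed through verbatim and stopping on <|im_end|> (the stop token is our assumption, not stated by the card):

from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="0Time/INCEPT-SH",
    filename="incept-sh.gguf",
)

# The prompt mirrors the documented format, including the empty <think> block.
prompt = (
    "<|im_start|>system\n"
    "ubuntu 22.04 bash non-root\n"
    "<|im_end|>\n"
    "<|im_start|>user\n"
    "list all open ports\n"
    "<|im_end|>\n"
    "<|im_start|>assistant\n"
    "<think>\n"
    "</think>\n"
)

# Temperature 0.0 = greedy decoding, per the card.
out = llm(prompt, max_tokens=128, temperature=0.0, stop=["<|im_end|>"])
print(out["choices"][0]["text"])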
Training
| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen3.5-0.8B |
| Training method | Supervised fine-tuning (LoRA, rank 16) |
| Training examples | 79,264 (SFT) + 11,306 (pipe refinement) |
| Learning rate | 5×10⁻⁵ |
| Quantization | Q8_0 (774MB) |
| Supported distros | Ubuntu, Debian, RHEL, Arch, Fedora, CentOS |
| Training hardware | Apple M4 Mac mini, 32GB unified RAM |
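As a rough guide to what the table corresponds to in code, here is a hypothetical peft configuration. Only the rank and learning rate come from the table; the alpha, dropout, and target modules are common Qwen-family defaults, not the author's published recipe:

from peft import LoraConfig

# r=16 and lr=5e-5 are from the training table above; everything else
# is a typical default and is NOT confirmed by this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,    # assumption: common 2*r choice
    lora_dropout=0.05,  # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
learning_rate = 5e-5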
Safety
- Prompt injection detection (exact-phrase matching)
- Catastrophic pattern blocking (rm -rf /, fork bombs, pipe-to-shell, etc.)
- Risk classification: SAFE/CAUTION/DANGEROUS/BLOCKED (see the sketch below)
- Zero outbound traffic at runtime
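The card describes these checks only at this level of detail. The following is a hypothetical illustration of pattern blocking plus risk tiers; the pattern list, tier heuristics, and function name are invented here, not taken from INCEPT.sh:

import re

# Hypothetical reimplementation of the described safety layer; the real
# pattern set and tier logic in INCEPT.sh are not published in this card.
BLOCKED_PATTERNS = [
    r"rm\s+-rf\s+/(\s|$)",            # catastrophic delete
    r":\(\)\s*\{\s*:\|:&\s*\}\s*;:",  # classic bash fork bomb
    r"curl\s+[^|]*\|\s*(ba)?sh",      # pipe-to-shell
]
DANGEROUS_PATTERNS = [r"\bmkfs\b", r"\bdd\s+if="]  # invented examples

def classify(command: str) -> str:
    """Return one of SAFE / CAUTION / DANGEROUS / BLOCKED."""
    if any(re.search(p, command) for p in BLOCKED_PATTERNS):
        return "BLOCKED"
    if any(re.search(p, command) for p in DANGEROUS_PATTERNS):
        return "DANGEROUS"
    if command.strip().startswith("sudo"):
        return "CAUTION"
    return "SAFE"

print(classify("rm -rf / --no-preserve-root"))   # BLOCKED
print(classify("sudo systemctl restart nginx"))  # CAUTION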
Requirements
- Linux x86_64 / aarch64
- Python 3.11+
- llama-server on PATH
- ~1GB RAM at runtime
Links
- GitHub: 0-Time/INCEPT.sh
- Release: v1.0.0