secubox-openwrt/package/secubox/secubox-app-localai-wb/files/etc/config/localai-wb
CyberMind-FR e50dcf6aee feat(secubox-app-localai-wb): Add LocalAI with native build support
New package for building LocalAI from source with llama-cpp backend:

- localai-wb-ctl: On-device build management
  - check: Verify build prerequisites
  - install-deps: Install build dependencies
  - build: Compile LocalAI with llama-cpp
  - Model management, service control
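  A rough usage sketch of the documented subcommands (only check, install-deps
  and build are named here; the model-management and service-control commands
  are mentioned but not spelled out, so they are omitted):

    localai-wb-ctl check          # verify RAM, storage and required tools
    localai-wb-ctl install-deps   # pull in the build dependencies
    localai-wb-ctl build          # compile LocalAI with the llama-cpp backend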

- build-sdk.sh: Cross-compile script for SDK
  - Uses OpenWrt toolchain for ARM64
  - Produces optimized binary with llama-cpp
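  A hedged sketch of what the cross-compile step can look like; the SDK path,
  toolchain name and Makefile variables below are illustrative assumptions,
  not taken from the actual build-sdk.sh:

    SDK=/opt/openwrt-sdk                                   # assumed SDK location
    TOOLCHAIN="$SDK/staging_dir/toolchain-aarch64_generic_gcc-13.3.0_musl"
    export STAGING_DIR="$SDK/staging_dir"
    export PATH="$TOOLCHAIN/bin:$PATH"
    export CC=aarch64-openwrt-linux-musl-gcc CXX=aarch64-openwrt-linux-musl-g++
    export GOOS=linux GOARCH=arm64 CGO_ENABLED=1           # LocalAI is a Go/CGO project
    make BUILD_TYPE=generic build                          # upstream LocalAI Makefile target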

Alternative to the Docker-based secubox-app-localai package for native builds.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-21 19:09:39 +01:00

config main 'main'
	option enabled '0'
	option installed '0'
	option api_port '8080'
	option api_host '0.0.0.0'
	option data_path '/srv/localai'
	option models_path '/srv/localai/models'
	option threads '4'
	option context_size '2048'
	option debug '0'
	option cors '1'

# Build settings
config build 'build'
	option version 'v2.25.0'
	option build_type 'generic'
	option backends 'llama-cpp'

# Model presets
config preset 'tinyllama'
	option name 'tinyllama'
	option url 'https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf'
	option size '669M'
	option description 'TinyLlama 1.1B - Ultra-lightweight'

config preset 'phi2'
	option name 'phi-2'
	option url 'https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q4_K_M.gguf'
	option size '1.6G'
	option description 'Microsoft Phi-2 - Compact and efficient'

config preset 'mistral'
	option name 'mistral-7b'
	option url 'https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf'
	option size '4.1G'
	option description 'Mistral 7B Instruct - High quality'
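
For reference, a minimal sketch of how an init or control script can consume this
file with standard OpenWrt tooling; the uci section/option paths follow the config
above, while the local-ai flag names come from upstream LocalAI and should be
verified against the built binary:

    HOST=$(uci -q get localai-wb.main.api_host)
    PORT=$(uci -q get localai-wb.main.api_port)
    MODELS=$(uci -q get localai-wb.main.models_path)
    THREADS=$(uci -q get localai-wb.main.threads)
    CTX=$(uci -q get localai-wb.main.context_size)

    # fetch one of the presets declared above into the models directory
    wget -O "$MODELS/tinyllama.gguf" "$(uci -q get localai-wb.tinyllama.url)"

    # start the API server
    local-ai --address "$HOST:$PORT" --models-path "$MODELS" \
             --threads "$THREADS" --context-size "$CTX"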