SymbioticLM-14B

Model Type: Hybrid Symbolic–Transformer with Persistent Memory
Base Model: Qwen3-14B
Framework: PyTorch + HuggingFace Transformers
Purpose: Full-scale cognitive reasoning model with self-organizing memory and generative symbolic evolution


Overview

SymbioticLM-14B is a 17.8-billion-parameter symbolic–transformer hybrid that tightly couples high-capacity neural representation with structured symbolic cognition. It is designed to match or exceed top-tier LLMs in symbolic domains, and it supports persistent memory, entropic recall, multi-stage symbolic routing, and self-organizing knowledge structures.

This model is ideal for advanced reasoning agents, research assistants, and symbolic math/code generation systems.


Architecture Highlights

  • Backbone: Qwen3-14B transformer with rotary embeddings + FlashAttention
  • Symbolic Dim: 8192
  • Symbolic Modules:
    • ThoughtDynamicsLNN (multi-head LSTM attention)
    • LiquidThoughtProcessor
    • CrystallineProcessor (DNAConv GNN)
    • HelicalDNAProcessor (linear helical encoding)
  • Memory: 4096 symbolic states in FP32, retrieved using entropy + contextual similarity
  • Dream Mode: Background symbolic simulation for open-ended cognition
  • Router: Intent classifier + entropy gating for processor path selection (see the sketch after this list)
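The retrieval and routing steps can be pictured with a short sketch. This is a minimal illustration, not the model's released code: the function names, the similarity/entropy blend, and the gating threshold below are assumptions made for exposition.

```python
import torch
import torch.nn.functional as F

def retrieve(query, memory_bank, top_k=8, entropy_weight=0.5):
    """Score stored symbolic states by cosine similarity to the query,
    blend in a normalized per-slot entropy term, and return the top-k slots.

    query:       (dim,) current hidden/symbolic representation
    memory_bank: (n_slots, dim) FP32 bank of symbolic states
    """
    sims = F.cosine_similarity(memory_bank, query.unsqueeze(0), dim=-1)   # contextual similarity
    p = F.softmax(memory_bank, dim=-1)
    entropy = -(p * p.clamp_min(1e-9).log()).sum(dim=-1)                  # per-slot entropy
    entropy = entropy / entropy.max().clamp_min(1e-9)                     # normalize to [0, 1]
    scores = (1.0 - entropy_weight) * sims + entropy_weight * entropy
    top = scores.topk(min(top_k, memory_bank.size(0)))
    return memory_bank[top.indices], top.values

def route(intent_logits, threshold=1.0):
    """Entropy-gated routing: confident (low-entropy) intents go to one symbolic
    processor; uncertain (high-entropy) intents fan out to an ensemble."""
    p = F.softmax(intent_logits, dim=-1)
    h = -(p * p.clamp_min(1e-9).log()).sum()
    return ("single", int(p.argmax())) if h < threshold else ("ensemble", None)
```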

Files Included

File                       Description
model.bin                  Transformer weights (stored via Git LFS)
model.safetensors          Memory-safe weights, optimized for loading
memory.pt                  Bank of 4096 symbolic memory vectors
config.json                Model and architectural metadata
generation_config.json     Decoding settings (top-p, temperature, etc.)
tokenizer.json             Full tokenizer with symbolic tag support
added_tokens.json          Symbolic tags such as <D_LIM>, <PROOF>, <BY_MEASURE>
special_tokens_map.json    Special token mapping for the tokenizer
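A minimal loading sketch, assuming the custom symbolic modules ship as remote code on the Hub and that memory.pt can be fetched with huggingface_hub; exact class names and loading hooks may differ in the released code.

```python
import torch
from huggingface_hub import hf_hub_download
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "reaperdoesntknow/Symiotic-14B"   # repo id as shown on this page

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float32,   # weights and memory bank are shipped in FP32
    trust_remote_code=True,      # assumed: the symbolic modules live in custom modeling code
)

# memory.pt holds the symbolic vector bank; from_pretrained does not load it,
# so fetch and load it separately.
memory_path = hf_hub_download(repo, "memory.pt")
memory_bank = torch.load(memory_path, map_location="cpu")
```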

Intended Uses

  • Multi-step conversational agents with persistent memory
  • Long-form symbolic theorem generation and proof planning (see the usage sketch after this list)
  • Scientific dialogue, symbolic simulations, math/code synthesis
  • Reasoning in fuzzy, discontinuous, or non-smooth problem domains
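For example, a proof-planning prompt can seed the symbolic tags from added_tokens.json. The snippet below is a hypothetical usage sketch: the prompt format and sampling values are illustrative, and trust_remote_code is assumed to be needed for the custom modules; generation_config.json normally supplies the decoding defaults.

```python
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="reaperdoesntknow/Symiotic-14B",
    trust_remote_code=True,
)

# Seed the symbolic route with tags listed in added_tokens.json.
prompt = "<PROOF> Every bounded monotone sequence of real numbers converges. <BY_MEASURE>"
result = generate(prompt, max_new_tokens=512, do_sample=True, top_p=0.9, temperature=0.7)
print(result[0]["generated_text"])
```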

Limitations

  • Memory requires curation and seeding for maximum utility
  • Symbolic cognition is not instruction-tuned for general QA
  • FlashAttention and symbolic modules increase VRAM usage during generation

Citations

Please cite "SymbioticLM" when using symbolic memory components in research or applications.


Convergent Intelligence Portfolio

Part of the Symbiotic AI Series by Convergent Intelligence LLC: Research Division

Mathematical Foundations: Discrepancy Calculus (DISC)

SymbioticLM's persistent memory and symbolic evolution connect to Discrepancy Calculus through self-generating completeness (Ch. 3 of the DISC monograph) and symbolic-root domains. The discrepancy operator:

$$Df(x) = \lim_{\varepsilon \downarrow 0} \frac{1}{\varepsilon} \int_x^{x+\varepsilon} \frac{|f(t) - f(x)|}{|t - x|}\, dt$$

quantifies local mismatch between integration and differentiation. In the symbolic-transformer context, $D$ measures the gap between what the symbolic system encodes (discrete structure) and what the transformer integrates (continuous representation). The self-generating completeness theorem establishes that completeness emerges dynamically via energy computation on symbolic-root domains — the mathematical foundation for why symbolic-neural hybrids can produce structure that neither component generates alone.
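As a numerical illustration (not part of the model code), the operator can be approximated at a small fixed $\varepsilon$ by averaging the difference quotient over $[x, x+\varepsilon]$: at a smooth point the value recovers $|f'(x)|$, while a cusp such as $\sqrt{t}$ at $t = 0$ makes it blow up as $\varepsilon \downarrow 0$.

```python
import numpy as np

def discrepancy(f, x, eps=1e-3, n=2000):
    """Approximate Df(x) = (1/eps) * ∫_x^{x+eps} |f(t) - f(x)| / |t - x| dt
    at a small fixed eps, using the midpoint rule to avoid the t = x endpoint."""
    t = x + (np.arange(n) + 0.5) * (eps / n)         # midpoints of n sub-intervals of [x, x+eps]
    integrand = np.abs(f(t) - f(x)) / np.abs(t - x)
    return integrand.mean()                          # (1/eps) * ∫ ≈ average of the integrand

print(discrepancy(lambda t: 2.0 * t, 1.0))   # smooth point: ≈ |f'(1)| = 2
print(discrepancy(np.sqrt, 0.0))             # cusp at 0: ≈ 2 / sqrt(eps), diverges as eps shrinks
```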

The discrepancy energy $E_{\text{disc}}[f] = \frac{1}{2}\int w(x)(Df(x))^2 d\mu(x)$ provides a natural stability criterion for the memory consolidation process: memory states with bounded discrepancy energy are stable; those with divergent energy indicate structural transitions requiring reorganization.
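A toy version of that criterion, assuming grid samples, weight $w \equiv 1$, Lebesgue measure, and a forward-difference proxy for $Df$; the reorganization threshold is not specified in the card, and the states below are synthetic stand-ins for stored memory trajectories.

```python
import torch

def discrepancy_energy(f_vals, dx, weight=None):
    """Approximate E_disc[f] = 1/2 * ∫ w(x) (Df(x))^2 dμ(x) on a uniform grid,
    using the forward difference quotient |f(x+dx) - f(x)| / dx as a proxy for Df."""
    quotients = (f_vals[1:] - f_vals[:-1]).abs() / dx
    w = torch.ones_like(quotients) if weight is None else weight[:-1]
    return 0.5 * (w * quotients.pow(2)).sum() * dx

x = torch.linspace(0.0, 1.0, 1024)
dx = float(x[1] - x[0])

stable_state   = torch.sin(2 * torch.pi * x)                  # smooth trajectory
unstable_state = torch.sign(torch.sin(40 * torch.pi * x))     # rapid sign flips

print(discrepancy_energy(stable_state, dx))    # bounded: ≈ pi^2 ≈ 9.9, stable under grid refinement
print(discrepancy_energy(unstable_state, dx))  # grows like 1/dx as the grid is refined: flag for reorganization
```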

Full theory: "On the Formal Analysis of Discrepancy Calculus" (Colca, 2026; Convergent Intelligence LLC: Research Division).

Related Models

Model            Downloads    Format
Symbiotic-1B     4            HF
Symbiotic-8B     4            HF
Symbiotic-Beta   3            HF

Top Models from Our Lab

Total Portfolio: 49 models, 22,598 total downloads

Last updated: 2026-03-28 12:57 UTC


From the Convergent Intelligence Portfolio

DistilQwen Collection — Our only BF16 series. Proof-weighted distillation from Qwen3-30B-A3B → 1.7B and 0.6B on H100. Three teacher variants (Instruct, Thinking, Coder), nine models, 2,788 combined downloads. The rest of the portfolio proves structure beats scale on CPU. This collection shows what happens when you give the methodology real hardware.

Top model: Qwen3-1.7B-Coder-Distilled-SFT — 508 downloads

Full methodology: Structure Over Scale (DOI: 10.57967/hf/8165)

Convergent Intelligence LLC: Research Division
