
Oussema Harbi

Harbous

AI & ML interests

None yet

Recent Activity

reacted to SeaWolf-AI's post with 🔥 about 15 hours ago
🔥 128 Blackwell GPUs — Thank You, Hugging Face

I've been awarded 128 NVIDIA Blackwell GPUs through NIPA (Korea's National IT Industry Promotion Agency). Sharing this here first — because Hugging Face is where it all started.

I design LLM architectures from scratch. HF was my lab — dissecting Transformer internals, analyzing thousands of checkpoints, iterating on Spaces with global feedback. Our FINAL Bench reached #5 globally in HF dataset popularity, and this research is exactly what earned the GPU grant.
👉 https://huggingface.co/spaces/FINAL-Bench/Leaderboard

These 128 Blackwells will scale AETHER-Net — our Proto-AGI architecture (Emergence Engine · Meta-Cognition · SLAI · Multi-Intelligence · Synergy & Critique) — validated at 0.8B params with MoE expansion to 2.1B. Next stop: 166B.

People I must thank:
@John6666 — Guardian of this ecosystem. Never misses a forum question, interested in every project, active 24/7. I've genuinely wondered if you're a machine. Remarkable.
@bartowski — Master of quantization. The hidden infrastructure of open-source LLMs. Countless experiments possible thanks to you.
@SaylorTwift — You see what others miss. Insight that cuts to the essence. Deep respect.

My promise: AETHER-Net design docs, training recipes, checkpoints, and failure logs — all shared here openly.

🤗 Thank you, Hugging Face. Let's turn the next page together. 🚀

vidraft · VIDRAFT
#OpenScience #HuggingFace #ProtoAGI #AETHER #LLMArchitecture #Blackwell #NIPA
reacted to kanaria007's post with 👍 2 months ago
✅ New Article: *Post-Transformer Decision Cores* (v0.1)

Title: 🚀 Post-Transformer Decision Cores: Goal-Native Engines Beyond LLMs
🔗 https://huggingface.co/blog/kanaria007/post-tranformer-decision-cores

---

Summary:
Transformers are powerful—but in SI-Core they're *not the essence of intelligence*. A *Decision Core* is anything that satisfies the *Jump contracts* (OBS/ETH/MEM/ID/EVAL + RML), and those contracts don't require next-token prediction. This article sketches what "post-Transformer" looks like in practice: *goal-native, structure-aware controllers* that may use LLMs as tools—but don't depend on them as the runtime brain.

> Don't relax the contracts.
> Replace the engine behind them.

---

Why It Matters:
• Makes LLMs *optional*: shift them to "genesis / exploration / explanation," while routine high-stakes Jumps run on structured cores
• Improves boring-but-critical properties: *determinism (CAS), fewer inconsistencies (SCI), fewer ETH violations (EAI), better rollback (RBL/RIR)*
• Enables gradual adoption via *pluggable Jump engines* and domain-by-domain "primary vs fallback" switching

---

What's Inside:
• The architectural inversion: *World → OBS → SIM/SIS → Jump (Decision Core) → RML → Effects* (the LLM is just one engine)
• Three compatible post-Transformer directions:
  1. *World-model + search controllers* (MPC/MCTS/anytime search with explicit GCS + ETH constraints)
  2. *Genius-distilled specialized controllers* (distill structure from GeniusTraces; the LLM becomes a "genesis tool")
  3. *SIL-compiled Decision Programs* (typed Jump entrypoints, compiler-checked invariants, DPIR/GSPU targeting)
• A realistic migration path: LLM-wrapped → Genius library → shadow dual-run → flip primary by domain → SIL-compiled cores
• How this connects to "reproducing genius": GRP provides trace selection/format; this article provides the engine architectures

---

📖 Structured Intelligence Engineering Series
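Below is a minimal sketch of the pluggable Decision Core idea the post above describes: any engine satisfying the Jump contracts can serve as the decision engine, with per-domain primary/fallback switching and an LLM-backed core kept as just one engine among several. All identifiers here (DecisionCore, JumpRequest, JumpResult, SearchCore, LLMCore, route_jump) are hypothetical illustrations, not names from SI-Core or the linked article.

```python
# Hypothetical sketch only: names below are illustrative, not SI-Core APIs.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class JumpRequest:
    domain: str          # e.g. "pricing", "triage"
    observation: dict    # OBS input
    goal: dict           # goal/constraint spec (GCS, in the post's terms)


@dataclass
class JumpResult:
    action: dict
    eth_ok: bool         # whether the ETH contract check passed
    trace: list[str]     # audit trail, usable for rollback (RML)


class DecisionCore(Protocol):
    """Anything satisfying the Jump contracts can act as an engine."""
    def jump(self, req: JumpRequest) -> JumpResult: ...


class SearchCore:
    """Structured engine, e.g. a world-model + MPC/MCTS-style controller."""
    def jump(self, req: JumpRequest) -> JumpResult:
        # Placeholder: a real engine would run constrained search against req.goal.
        return JumpResult(action={"plan": "search-derived"}, eth_ok=True, trace=["search"])


class LLMCore:
    """LLM-wrapped engine, kept for exploration / fallback duty."""
    def jump(self, req: JumpRequest) -> JumpResult:
        # Placeholder: a real engine would prompt an LLM and validate its output.
        return JumpResult(action={"plan": "llm-derived"}, eth_ok=True, trace=["llm"])


# Domain-by-domain "primary vs fallback" switching, as described in the post.
PRIMARY: dict[str, DecisionCore] = {"pricing": SearchCore()}
FALLBACK: DecisionCore = LLMCore()


def route_jump(req: JumpRequest) -> JumpResult:
    engine = PRIMARY.get(req.domain, FALLBACK)
    result = engine.jump(req)
    if not result.eth_ok:  # contract violation: fall back to the secondary engine
        result = FALLBACK.jump(req)
    return result
```

Under these assumptions, flipping a domain from LLM-wrapped to a structured core is just a change to the PRIMARY mapping, which is one way to read the post's migration path.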

Organizations

None yet