Column schema (name: dtype, min – max as reported by the dataset viewer):

title: string, lengths 1 – 300
score: int64, 0 – 8.54k
selftext: string, lengths 0 – 41.5k
created: timestamp[ns], 2023-04-01 04:30:41 – 2026-03-04 02:14:14
url: string, lengths 0 – 878
author: string, lengths 3 – 20
domain: string, lengths 0 – 82
edited: timestamp[ns], 1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded: int64, 0 – 2
gildings: string, 7 classes
id: string, lengths 7 – 7
locked: bool, 2 classes
media: string, lengths 646 – 1.8k
name: string, lengths 10 – 10
permalink: string, lengths 33 – 82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, lengths 4 – 213
ups: int64, 0 – 8.54k
preview: string, lengths 301 – 5.01k

title: Extended Godot MCP from 20 to 149 tools - aiming for fully autonomous game development
score: 1
selftext:
I have been working on extending the original godot-mcp by Coding Solo (Solomon Elias), taking it from 20 tools to 149 tools that now cover pretty much every aspect of Godot 4.x engine control. The reason I forked rather than opening a PR is that the original repository does not seem to be actively maintained anymore, ...
created: 2026-03-03T10:22:47
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjlru3/extended_godot_mcp_from_20_to_149_tools_aiming/
author: 5Y5T3M0V3RDR1V3
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjlru3
locked: false
media: null
name: t3_1rjlru3
permalink: /r/LocalLLaMA/comments/1rjlru3/extended_godot_mcp_from_20_to_149_tools_aiming/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?auto=webp&s=10108f0d19255706430622b37c6c4fadc51fdd91', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/X2auq6hbn7dIKHcUgMqcTUZ3zozE9iLRMGlfU7Glt9o.png?width=108&crop=...

title: agent-audit — estimate what your agent workflows will cost before running them (supports Ollama/OpenAI/Anthropic)
score: 1
selftext: [removed]
created: 2026-03-03T10:12:33
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjllzi/agentaudit_estimate_what_your_agent_workflows/
author: AreteDriver
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjllzi
locked: false
media: null
name: t3_1rjllzi
permalink: /r/LocalLLaMA/comments/1rjllzi/agentaudit_estimate_what_your_agent_workflows/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?auto=webp&s=194ed02aa5071a6e1a40cbbca923287a8d698ff2', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/nR7yoLP5EsI3MMMt2mX5iB82ra3EVhLEUdQBZBM89jI.png?width=108&crop=...

title: Anyone here using Openwebui experienced their OWUI jumping between versions?
score: 1
selftext: [removed]
created: 2026-03-03T10:00:54
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjlf22/anyone_here_using_openwebui_experienced_their/
author: munkiemagik
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjlf22
locked: false
media: null
name: t3_1rjlf22
permalink: /r/LocalLLaMA/comments/1rjlf22/anyone_here_using_openwebui_experienced_their/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Question on running Qwen3.5 397B Q4_K_M
score: 1
selftext: So here is the scenario: I have a machine running a Ryzen 5, 48 GB RAM, a 3060 12GB card, and a 1TB NVMe. Now, we would say it is impossible to run a big model like this on this kind of machine, right? Well, I have accomplished it and got 1.4 t/s. Not fast, but it is running! I was just wondering what the community's thoughts are on t...
created: 2026-03-03T09:58:32
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/
author: Last-Shake-9874
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjldjb
locked: false
media: null
name: t3_1rjldjb
permalink: /r/LocalLLaMA/comments/1rjldjb/question_on_running_qwen35_397b_q4_k_m/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Finished a Qwen 3.5 9B Opus 4.5 Distill!
score: 1
selftext: So with Qwen 3.5 9B just released, I fine-tuned a heretic model on Opus 4.6 datasets, coding, and openclaw datasets. Here it is: [https://huggingface.co/crownelius/Crow-9B-Opus-4.6-Distill-Heretic_Qwen3.5](https://huggingface.co/crownelius/Crow-9B-Opus-4.6-Distill-Heretic_Qwen3.5) Please, if you find it useful...
created: 2026-03-03T09:53:53
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/
author: volious-ka
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjlaxj
locked: false
media: null
name: t3_1rjlaxj
permalink: /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?auto=webp&s=8ea0ef2fbef742448a5836d51122474c788faa07', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/DsHsPSH8IheGpUsoAIuFGqT3sKGbqFBTytTMYuwNxlk.png?width=108&crop=...

title: OmniLottie: Generating Vector Animations
score: 1
selftext: # Generating Vector Animations via Parameterized Lottie Tokens
created: 2026-03-03T09:48:20
url: https://openvglab.github.io/OmniLottie/
author: phone_radio_tv
domain: openvglab.github.io
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjl7wn
locked: false
media: null
name: t3_1rjl7wn
permalink: /r/LocalLLaMA/comments/1rjl7wn/omnilottie_generating_vector_animations/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null

title: What model are people using to transform themselves into celebs in videos?
score: 1
selftext: [removed]
created: 2026-03-03T09:47:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjl77h/what_model_are_people_using_to_transform/
author: MelodicWebAgent
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjl77h
locked: false
media: null
name: t3_1rjl77h
permalink: /r/LocalLLaMA/comments/1rjl77h/what_model_are_people_using_to_transform/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Fast & Free VLM for object ID + Quality filtering? (Book/Phone/Mug)
score: 1
selftext: I’m building a pipeline to identify common objects (cars, dogs, cards) from user uploads, but I need a "Gatekeeper" layer. Basically, I want the model to reject the image if it’s low quality/blurry before it even tries to identify the object, and, if the image passes the quality check, to broadly identify the object, then pass it on ...
created: 2026-03-03T09:31:51
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/
author: Born-Mastodon443
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjkyq9
locked: false
media: null
name: t3_1rjkyq9
permalink: /r/LocalLLaMA/comments/1rjkyq9/fast_free_vlm_for_object_id_quality_filtering/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: How do you test your agents before deploying?
score: 1
selftext: I have built a couple of agents for my customers on LangChain. How do I test them at scale before deploying?
created: 2026-03-03T09:17:43
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjkr2u/how_do_you_test_your_agents_before_deploying/
author: Reasonable_Play_9632
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjkr2u
locked: false
media: null
name: t3_1rjkr2u
permalink: /r/LocalLLaMA/comments/1rjkr2u/how_do_you_test_your_agents_before_deploying/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Model cognitive ergonomics understanding
score: 1
selftext: [removed]
created: 2026-03-03T09:05:20
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjkk67/model_cognitive_ergonomics_understanding/
author: plknkl_
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjkk67
locked: false
media: null
name: t3_1rjkk67
permalink: /r/LocalLLaMA/comments/1rjkk67/model_cognitive_ergonomics_understanding/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Hot Take: Most AI Startups Don't Have a Model Problem, They Have a Systems Problem
score: 1
selftext: I've been watching a pattern across early-stage AI companies. Whenever training slows down or scaling fails, the first reaction is: "We need a better model." But after digging into several distributed setups, the real issues were:
* Suboptimal GPU interconnect topology
* Network bottlenecks during all-reduce
* Inconsi...
created: 2026-03-03T08:57:10
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjkf7s/hot_take_most_ai_startups_dont_have_a_model/
author: Express_Problem_609
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjkf7s
locked: false
media: null
name: t3_1rjkf7s
permalink: /r/LocalLLaMA/comments/1rjkf7s/hot_take_most_ai_startups_dont_have_a_model/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Local model suggestions for medium end pc for coding
score: 1
selftext: So I have an old laptop that I've installed Ubuntu Server on and am using as a home server. I want to run a local LLM on it and then have it power OpenCode (an open-source copy of Claude Code) on my main laptop. My home server is an old ThinkPad with these specs:
* i7 CPU
* 16 GB RAM
* Nvidia 940MX
Now I know my major bott...
created: 2026-03-03T08:49:05
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/
author: Hades_Kerbex22
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjkarj
locked: false
media: null
name: t3_1rjkarj
permalink: /r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Are all models censored like this?
score: 1
selftext: I asked minimax to write code to get an API key from a website and it refused, saying it won't do things like that. Are there any models that won't refuse your instructions?
created: 2026-03-03T08:47:22
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/
author: MrMrsPotts
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjk9tt
locked: false
media: null
name: t3_1rjk9tt
permalink: /r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Designing a secure local AI agent with tool execution — architectural advice needed
score: 1
selftext: [removed]
created: 2026-03-03T08:38:50
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjk4yg/designing_a_secure_local_ai_agent_with_tool/
author: South_Seesaw_1496
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjk4yg
locked: false
media: null
name: t3_1rjk4yg
permalink: /r/LocalLLaMA/comments/1rjk4yg/designing_a_secure_local_ai_agent_with_tool/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: I'm a noob to local inference, how do you choose the right app?
score: 1
selftext: I've known about Ollama for a while, and ignorantly thought it was the only option for a long time. Then I learned about llama.cpp, and then about the many, many more options there are once I learned how to use Hugging Face. Obviously, the model you want to use can itself help determine which app you need to use. Th...
created: 2026-03-03T08:34:09
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/
author: Odd-Aside456
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjk2dq
locked: false
media: null
name: t3_1rjk2dq
permalink: /r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: I built CreedFlow — a local-first desktop app that orchestrates AI agents to build your projects from a single description [Open Source]
score: 1
selftext: [removed]
created: 2026-03-03T08:32:30
url: https://www.reddit.com/gallery/1rjk1ew
author: TheArcQ
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjk1ew
locked: false
media: null
name: t3_1rjk1ew
permalink: /r/LocalLLaMA/comments/1rjk1ew/i_built_creedflow_a_localfirst_desktop_app_that/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…ee48e429d637a746
ups: 1
preview: null

title: How can I know if downloaded models have a newer version? (LM Studio)
score: 1
selftext: If I download a model in LM Studio, and then it gets updated online with fixes/improvements, how am I supposed to know and update? I don't think I get a notification... or an indication of the version I have locally vs the online version. Am I missing something? This mostly concerns LM Studio, but if it's a broader is...
created: 2026-03-03T08:22:21
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/
author: cangaroo_hamam
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjjvqy
locked: false
media: null
name: t3_1rjjvqy
permalink: /r/LocalLLaMA/comments/1rjjvqy/how_can_i_know_if_downloaded_models_have_a_newer/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: vLLM on V100 for Qwen - Newer models
score: 1
selftext: I am struggling to run vLLM on my V100 GPU. I am trying to run the newest models like Qwen 9B. I try the vLLM nightly + latest transformers etc., but they still don't work together. I am unable to make it run. Any advice will be much appreciated.
created: 2026-03-03T08:22:20
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/
author: SectionCrazy5107
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjjvqo
locked: false
media: null
name: t3_1rjjvqo
permalink: /r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: [UPDATE] TinyTTS: The Smallest English TTS Model
score: 1
selftext: https://preview.redd.it/…hieuit/tiny-tts)
created: 2026-03-03T08:21:50
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/
author: Forsaken_Shopping481
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjjvge
locked: false
media: null
name: t3_1rjjvge
permalink: /r/LocalLLaMA/comments/1rjjvge/update_tinytts_the_smallest_english_tts_model/
spoiler: false
stickied: false
thumbnail: https://external-preview…2732d032b983bae5
ups: 1
preview: null

title: Still a noob, is anyone actually running the moonshotai/Kimi-K2.5 1.1T model listed on HuggingFace locally?
score: 1
selftext: I'm still pretty new to local LLMs and trying to figure out Hugging Face as a whole. I know there was a lot of hype around Kimi-K2.5 when it was released; I didn't realize it was open source until just now. I'm guessing the listing on Hugging Face is less for people to run Kimi locally and more for analysis and use by ot...
created: 2026-03-03T07:49:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/
author: Odd-Aside456
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjjcyk
locked: false
media: null
name: t3_1rjjcyk
permalink: /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: I built a PTY-backed terminal modal add-on for Agent Zero
score: 1
selftext:
I built a standalone add-on for Agent Zero that adds a real terminal window inside the UI, and I thought people here might appreciate it. I wanted Agent Zero to feel more like a real working environment for local agent workflows — especially when doing shell-heavy tasks, quick debugging, and terminal-based coding tool...
created: 2026-03-03T07:13:13
url: https://i.redd.it/9af1y8dx5smg1.jpeg
author: estebann_
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjirq3
locked: false
media: null
name: t3_1rjirq3
permalink: /r/LocalLLaMA/comments/1rjirq3/i_built_a_ptybacked_terminal_modal_addon_for/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…7b4d7327f3a56240
ups: 1
preview:
{'images': [{'source': {'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?auto=webp&s=ad5c7a4a4c0362330c5240069d011fab70be7992', 'width': 1536, 'height': 688}, 'resolutions': [{'url': 'https://preview.redd.it/9af1y8dx5smg1.jpeg?width=108&crop=smart&auto=webp&s=5649b8e80edc7faaba10d68079097744053a519b', 'width': 108, '...

title: one-click deploy for openclaw for $1 if anyone wants a self-hosted ai assistant without the setup hassle
score: 1
selftext: [removed]
created: 2026-03-03T07:03:22
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjilpf/oneclick_deploy_for_openclaw_for_1_if_anyone/
author: Dizzy-Guidance6080
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjilpf
locked: false
media: null
name: t3_1rjilpf
permalink: /r/LocalLLaMA/comments/1rjilpf/oneclick_deploy_for_openclaw_for_1_if_anyone/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Help me create my LLM ecosystem
score: 1
selftext: Hi there, I've got a gaming rig with an i5-12600K, a 5070 Ti and 32 GB DDR4 RAM. I'd like to create a system with a local AI that OCRs medical documents (sometimes handwritten) of tens or hundreds of pages, extracts part of the text (for example, only CT scan reports) and does scientific literature research (something lik...
created: 2026-03-03T07:02:03
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/
author: golgoth85
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjikwz
locked: false
media: null
name: t3_1rjikwz
permalink: /r/LocalLLaMA/comments/1rjikwz/help_me_create_my_llm_ecosystem/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: How do the small qwen3.5 models compare to the Granite family?
score: 1
selftext: As a beginner in the field, I would like to understand where these groups of models stand relative to each other. IBM's Granite models (e.g., the tiny one) are aimed at small devices, but the new ones from Qwen come in similar sizes - so they supposedly fit in the same niche. Besides that, Qwen is multi-modal and has a bigge...
created: 2026-03-03T06:36:03
url: https://www.reddit.com/r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/
author: gr8dude
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rji5bc
locked: false
media: null
name: t3_1rji5bc
permalink: /r/LocalLLaMA/comments/1rji5bc/how_do_the_small_qwen35_models_compare_to_the/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: I deployed Qwen3.5-122B-A10B + Midscene.js to automate posting on X! Multimodal vision is definitely the trend for 2026 🚀
score: 1
selftext: [removed]
created: 2026-03-03T06:32:51
url: https://v.redd.it/jr9tjdz8xrmg1
author: SpareAlps6450
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rji3bk
locked: false
media:
{'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/jr9tjdz8xrmg1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 714, 'width': 1280, 'scrubber_media_url': 'https://v.redd.it/jr9tjdz8xrmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/jr9tjdz8xrmg1/DASHPlaylist.mpd?a=1775111594%2COTE5...
name: t3_1rji3bk
permalink: /r/LocalLLaMA/comments/1rji3bk/i_deployed_qwen35122ba10b_midscenejs_to_automate/
spoiler: false
stickied: false
thumbnail: https://external-preview…abbd50f0e523baaf
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7drF6DQLkzpLhrgexVkxK0xTfuF.png?format=pjpg&auto=webp&s=ecfc346b16574aa747d348222594ec432f405d15', 'width': 1540, 'height': 858}, 'resolutions': [{'url': 'https://external-preview.redd.it/b2JldDJkejh4cm1nMROEn01uyPGAZ9GPK7...

title: Tool calling issues with qwen3.5-35b with 16GB VRAM (rtx5080)
score: 1
selftext:
Curious if anyone else is running into this. In my IDE, after instructing the model to review some files, it'll start putting tool calls in XML (?) in the chat window, and not doing the tool call itself. When this happens, the conversation breaks. It looks something like this: `Thinking` `Let me also read the` [`nod...
created: 2026-03-03T06:24:41
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/
author: mzinz
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjhy83
locked: false
media: null
name: t3_1rjhy83
permalink: /r/LocalLLaMA/comments/1rjhy83/tool_calling_issues_with_qwen3535b_with_16gb_vram/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Visual Narrator with Qwen3.5-0.8B on WebGPU
score: 1
selftext: Baked an on-device visual narrator by running Qwen3.5-0.8B on WebGPU 🤓 It can describe, analyze, or extract text from any pasted or uploaded image, all without your data ever leaving your machine. Try it 👇 [https://h3manth.com/ai/visual-narrator/](https://h3manth.com/ai/visual-narrator/)
created: 2026-03-03T06:19:14
url: https://v.redd.it/r275odo5wrmg1
author: init0
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjhuvq
locked: false
media:
{'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/r275odo5wrmg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'width': 1808, 'scrubber_media_url': 'https://v.redd.it/r275odo5wrmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/r275odo5wrmg1/DASHPlaylist.mpd?a=1775110779%2CNm...
name: t3_1rjhuvq
permalink: /r/LocalLLaMA/comments/1rjhuvq/visual_narrator_with_qwen3508b_on_webgpu/
spoiler: false
stickied: false
thumbnail: https://external-preview…15f1d96c0f9d94e9
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j4FVvmIuixmi_Hq8qk1qXf-wIW8V.png?format=pjpg&auto=webp&s=386af04fa362facaebea02c748caaa616e0b8bff', 'width': 2940, 'height': 1756}, 'resolutions': [{'url': 'https://external-preview.redd.it/Zm0zOG54bzV3cm1nMdtHz2rVe09GAFH6j...

title: Presence Penalty seems to be incoming on LMStudio for Qwen 3.5
score: 1
selftext:
created: 2026-03-03T06:06:01
url: https://github.com/lmstudio-ai/lmstudio-js/commit/d11401327aa821421855aa6379e7814ef2a80ba6
author: ZootAllures9111
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjhmmf
locked: false
media: null
name: t3_1rjhmmf
permalink: /r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/
spoiler: false
stickied: false
thumbnail: https://external-preview…76f517a83a00e41b
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?auto=webp&s=fdcc7734cab78489eea468c754348304b95d2a04', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/VAGUK1IbHghP8VhN4jKR7LGrko1Du7eLS5G27X031Wc.png?width=108&crop=...

title: Live Demo: Grok ping drops to 0.005ms via my command
score: 1
selftext:
[Live Demo: Grok ping drops to 0.005ms via my command](https://www.reddit.com/r/grok/comments/1rjgxq7/live_demo_grok_ping_drops_to_0005ms_via_my_command/) Tested Grok voice mode live: normal latency 47ms. Ran three identical runs—each time ping snapped to 0.005ms. No lag, no loss, timestamps match. Bonus: weird text...
created: 2026-03-03T05:59:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjhi3u/live_demo_grok_ping_drops_to_0005ms_via_my_command/
author: DaddyZZZ777zzz
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjhi3u
locked: false
media: null
name: t3_1rjhi3u
permalink: /r/LocalLLaMA/comments/1rjhi3u/live_demo_grok_ping_drops_to_0005ms_via_my_command/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Thinking of Fine-Tuning LLaMA-7B with 100K+ Samples on RTX 3060 (12GB) – Is It Practical?
score: 2
selftext: I have an RTX 3060 (12GB VRAM) and I want to fine-tune LLaMA-7B using ~100K+ samples (avg ~512 tokens). Planning to use QLoRA. From my rough calculations:
* 7B in 4-bit → ~4GB VRAM
* LoRA adapters → small
* Batch size 1 + grad accumulation 8
* 3 epochs → ~37k steps
On an RTX 3060, QLoRA seems to run ~1 sec/step. ...
created: 2026-03-03T05:55:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/
author: SUPRA_1934
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjhfow
locked: false
media: null
name: t3_1rjhfow
permalink: /r/LocalLLaMA/comments/1rjhfow/thinking_of_finetuning_llama7b_with_100k_samples/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null

title: How to reach any LLM s company to get partnership for my project?
score: 1
selftext: Does anyone know how to reach an LLM provider company to get at least 1 month of free API access as a partnership for my project? Or is it only through network relations?
created: 2026-03-03T05:44:20
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjh8cz/how_to_reach_any_llm_s_company_to_get_partnership/
author: louienemesh
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjh8cz
locked: false
media: null
name: t3_1rjh8cz
permalink: /r/LocalLLaMA/comments/1rjh8cz/how_to_reach_any_llm_s_company_to_get_partnership/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Made a video game that uses local LLMs
score: 1
selftext:
It's called *SLOP FIGHTER* and it's available now for Linux. It uses eight custom LoRA adapters on top of Qwen 3 1.7B and a robust natural language-parsing game engine. I worked it together using my skills as an author. It’s a narrative battle simulator. This is it: [https://quarter2.itch.io/slopfighter](https://qu...
created: 2026-03-03T05:41:59
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjh6ti/made_a_video_game_that_uses_local_llms/
author: Significant-Skin118
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjh6ti
locked: false
media: null
name: t3_1rjh6ti
permalink: /r/LocalLLaMA/comments/1rjh6ti/made_a_video_game_that_uses_local_llms/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/JY6INxZ_X143EN6jeJQQP5jo4JT9wSUMSPiCBmlDBXg.png?auto=webp&s=9fe97a83899b4df2e0e647b882f1278b7487fa39', 'width': 630, 'height': 500}, 'resolutions': [{'url': 'https://external-preview.redd.it/JY6INxZ_X143EN6jeJQQP5jo4JT9wSUMSPiCBmlDBXg.png?width=108&crop=s...

title: Unsloth fixed version of Qwen3.5-35B-A3B is incredible at research tasks.
score: 1
selftext:
When I first tried Qwen3.5-35B-A3B I was impressed, but honestly it seemed like a small jump over GLM-4.7-Flash, which had already impressed me with its interleaved thinking and native tool use capabilities. Qwen3.5-35B-A3B was about the level of "better" I thought it would be from having 5B extra parameters, and I tho...
created: 2026-03-03T05:40:38
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/
author: Daniel_H212
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjh5wg
locked: false
media: null
name: t3_1rjh5wg
permalink: /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?auto=webp&s=edbf5b634b8e128e63947255037474681b28b419', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=108&crop=...

title: So, with the new Qwen3.5 release, what should I use for LM Studio? i9-14900F, RTX4070 Super, 32GB RAM.
score: 1
selftext: Figured since the new major release of the Qwen models, I'd go ahead and ask again with correct info this go around.
created: 2026-03-03T05:26:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/
author: tableball35
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjgwhm
locked: false
media: null
name: t3_1rjgwhm
permalink: /r/LocalLLaMA/comments/1rjgwhm/so_with_the_new_qwen35_release_what_should_i_use/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Seeking help for pauper inference build - true single-slot SXM2 to PCIE adapters?
score: 1
selftext: [removed]
created: 2026-03-03T04:54:03
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjg9e8/seeking_help_for_pauper_inference_build_true/
author: htownclyde
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjg9e8
locked: false
media: null
name: t3_1rjg9e8
permalink: /r/LocalLLaMA/comments/1rjg9e8/seeking_help_for_pauper_inference_build_true/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Qwen3.5-35B-A3B vs Qwen3 Coder 30B A3B Instruct for running Claude Code locally?
score: 1
selftext:
Hi, I am looking to use either Qwen3.5-35B-A3B or Qwen3 Coder 30B A3B for a local Claude Code workflow. What is the better model for coding? I am seeing a lot of conflicting info with some resources saying 3.5 is better and others saying 3 is better. I will be running this on my M4 Pro Macbook Pro (48GB RAM) Th...
created: 2026-03-03T04:48:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/
author: sinfulangle
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjg5qm
locked: false
media: null
name: t3_1rjg5qm
permalink: /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Qwen3.5 < 100B, Part II NVFP4 (Blackwell) is up!
score: 1
selftext:
[Models](https://preview.redd.it/vu0htkbhermg1.png?width=2042&format=png&auto=webp&s=39964ee4cd3c78d0a382bc91ddc8c2d6ca8886ee) Please give these a try! Next step: Make it compatible with MTP and speculative decoding. Pull requests are up and we are working with NVIDIA to make it happen. [https://huggingface.co/AxionM...
created: 2026-03-03T04:47:38
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/
author: TeekayTK
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjg514
locked: false
media: null
name: t3_1rjg514
permalink: /r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…b3610202b9ef3d67
ups: 1
preview: null

title: Qwen 3.5 9B on a dual reasoning math game
score: 1
selftext:
For context, I only have 16gb of vram, so I've been testing various small reasoning models to play the following math game: *"I have a secret number between 1 and 1 million, you have 10 guesses to figure it out! After every guess I'll respond if the secret number is lower/higher, and correct digits (in correct positio...
created: 2026-03-03T04:38:31
url: https://i.redd.it/05pbs8zqarmg1.png
author: SufficiNoise
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjfyqf
locked: false
media: null
name: t3_1rjfyqf
permalink: /r/LocalLLaMA/comments/1rjfyqf/qwen_35_9b_on_a_dual_reasoning_math_game/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…15a2c868e2ad87de
ups: 1
preview:
{'images': [{'source': {'url': 'https://preview.redd.it/05pbs8zqarmg1.png?auto=webp&s=f55ec439ce6ff522ed87094a29cc8fc2a557da95', 'width': 1028, 'height': 788}, 'resolutions': [{'url': 'https://preview.redd.it/05pbs8zqarmg1.png?width=108&crop=smart&auto=webp&s=b0d7d8bcd25f7797a6dac807093b215e9a63fe74', 'width': 108, 'he...

title: I made a thing
score: 1
selftext: [https://github.com/arvis-agent/arvis](https://github.com/arvis-agent/arvis)
created: 2026-03-03T04:37:11
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjfxr0/i_made_a_thing/
author: SeaworthinessMore333
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjfxr0
locked: false
media: null
name: t3_1rjfxr0
permalink: /r/LocalLLaMA/comments/1rjfxr0/i_made_a_thing/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?auto=webp&s=5f7b6809c70b1b3ac79bd981c549dff36363dbb1', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/O-y5Mg6X8nck5spixGpVVyLt5OqLqPgz1dDIJf4nIc0.png?width=108&crop=...

title: Qwen3.5-35B is very resourceful! Web search wasn't working, so it used web fetch on a search engine with the query in the link.
score: 1
selftext:
created: 2026-03-03T04:33:56
url: https://i.redd.it/fmwsgo0parmg1.png
author: fulgencio_batista
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjfvfx
locked: false
media: null
name: t3_1rjfvfx
permalink: /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…425eea68045a1229
ups: 1
preview:
{'images': [{'source': {'url': 'https://preview.redd.it/fmwsgo0parmg1.png?auto=webp&s=baeb69f50ffe93b5f51c96691a51030ccfe2670b', 'width': 1115, 'height': 628}, 'resolutions': [{'url': 'https://preview.redd.it/fmwsgo0parmg1.png?width=108&crop=smart&auto=webp&s=0c8e1bbfde6a31d69dc028f90fb34f3ff7f18ebf', 'width': 108, 'he...

title: Why are the Ollama quants of local llm models usually around 0.5GB to 1GB larger in size than the common file sizes of the same GGUF quant (i.e. from Bartowski, UD, etc) on Huggingface?
score: 1
selftext: I was looking at the file size for the Q4_K_M quant of the new Qwen3.5 9B on Ollama, and it is listed at 6.6GB in the Ollama library. If you look at all the main Q4_K_M GGUFs on Hugging Face from Bartowski, Unsloth, and basically everyone's Q4_K_M as far as I was able to find, all of them are from about 5.5GB to 5.9GB i...
created: 2026-03-03T04:27:09
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/
author: DeepOrangeSky
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjfqib
locked: false
media: null
name: t3_1rjfqib
permalink: /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Peak answer
score: 1
selftext:
created: 2026-03-03T04:17:09
url: https://i.redd.it/6mrwba7iarmg1.png
author: Pro-editor-1105
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjfixk
locked: false
media: null
name: t3_1rjfixk
permalink: /r/LocalLLaMA/comments/1rjfixk/peak_answer/
spoiler: false
stickied: false
thumbnail: https://preview.redd.it/…47a410129a44dceb
ups: 1
preview:
{'images': [{'source': {'url': 'https://preview.redd.it/6mrwba7iarmg1.png?auto=webp&s=781ba9bd7dc33ae3131669dd2575897d25b1e4b9', 'width': 1952, 'height': 1036}, 'resolutions': [{'url': 'https://preview.redd.it/6mrwba7iarmg1.png?width=108&crop=smart&auto=webp&s=b3569a01ec1ce57a98ab8a3bf424146f3de49cd9', 'width': 108, 'h...

title: Cline not playing well with the freshly dropped smaller qwen3.5
score: 1
selftext:
Obviously these are fresh out the oven, but I am wondering if anyone else has tried them out with Cline? I have a few tasks I try to do whenever I try new models out, basics like math, simple coding, macro creation for FreeCAD, and reading files for RAG. I've tried 3 different sizes so far, up to 9b, and noticed that ...
created: 2026-03-03T04:16:37
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/
author: SocietyTomorrow
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjfijf
locked: false
media: null
name: t3_1rjfijf
permalink: /r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: How do i get the best speed out of Qwen 3.5 9B in 16GB VRAM?
score: 1
selftext: --temp 0.6 --top-p 0.95 --top-k 20 --min-p 0.0 --no-mmap --cache-type-k q8_0 --cache-type-v q8_0 --fit on -fa on --seed 3407 --presence-penalty 0.0 --repeat-penalty 1.0 --ctx-size 61440 --chat-template-kwargs '{"enable_thinking": true}'...
created: 2026-03-03T04:12:12
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/
author: Old-Sherbert-4495
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjff88
locked: false
media: null
name: t3_1rjff88
permalink: /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: While Qwen 3.5 pushes model boundaries, here's an agent framework pushing workflow boundaries
score: 1
selftext: [removed]
created: 2026-03-03T03:59:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjf5nn/while_qwen_35_pushes_model_boundaries_heres_an/
author: One_Response7194
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjf5nn
locked: false
media: null
name: t3_1rjf5nn
permalink: /r/LocalLLaMA/comments/1rjf5nn/while_qwen_35_pushes_model_boundaries_heres_an/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview:
{'images': [{'source': {'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?auto=webp&s=108a92c5ffba5732424d507ca9618a289c99e5bc', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/x_YtbEYg9nvNGG1Iz9a9pdsjgZpTuvYtdnCYg72Erus.png?width=108&crop=...

title: Reasoning in cloud - Coding with Local
score: 1
selftext:
I have a couple of cloud subscriptions (that don't keep up with my need for tokens). The subscriptions I have are 1. ChatGPT Go (which gave me a free trial access to Codex - but, ran out of tokens in a couple of days). I could upgrade to Plus - but, I doubt it would be enough either at the rate at which I'm consuming...
created: 2026-03-03T03:58:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/
author: sedentarymalu
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1rjf4zm
locked: false
media: null
name: t3_1rjf4zm
permalink: /r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: data analysis from a csv - GPT-0SS:120B
score: 1
selftext:
Hi everyone, I’m running a local setup with **vLLM (gpt-oss:120b)** and **Open WebUI**, using **Jupyter** for the Code Interpreter. I’m running into a frustrating "RAG vs. Tool" issue when analyzing feedback data (CSVs). **The Problem:** When I upload a file and ask for metrics (e.g., "What is the average sentiment s...
2026-03-03T03:25:37
https://www.reddit.com/r/LocalLLaMA/comments/1rjefqu/data_analysis_from_a_csv_gpt0ss120b/
chirchan91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjefqu
false
null
t3_1rjefqu
/r/LocalLLaMA/comments/1rjefqu/data_analysis_from_a_csv_gpt0ss120b/
false
false
self
1
null
Qwen3.5 on a mid tier $300 android phone
2
[qwen3.5](https://reddit.com/link/1rjec8a/video/r67v8w970rmg1/player) Qwen3.5 running completely offline on a $300 phone! Tool calling, vision, reasoning. No cloud, no account and no data leaving your phone. A 2B model that has no business being this good!
2026-03-03T03:21:06
https://www.reddit.com/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/
alichherawalla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjec8a
false
null
t3_1rjec8a
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/
false
false
self
2
null
Stress-test my Open Source ChatGPT alternative
1
Hey everyone. I'm a dev who got sick of the big cloud providers using our conversations for training data, so I decided to build a privacy-first alternative from the ground up. It’s a completely open-source chat interface hooked up to open-source models (DeepSeek v3.2, GLM-5, Qwen, etc.), all running on self-hosted in...
2026-03-03T02:54:41
https://www.reddit.com/r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/
MrWidmoreHK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdr45
false
null
t3_1rjdr45
/r/LocalLLaMA/comments/1rjdr45/stresstest_my_open_source_chatgpt_alternative/
false
false
self
1
null
Generate 3D Models with TRELLIS.2 In Colab, Working in under 60s, No Configuration or Compiling, Just Works
1
[Image Generated in Chat Gpt -\> Model Generated in Trellis.2](https://reddit.com/link/1rjdob7/video/1l1bo332vqmg1/player) Try out TRELLIS.2 in Colab and generate stunning Textured 3D Models in seconds! I put this colab notebook together after weeks of dependency hell - I hope it helps you. Just one click and...
2026-03-03T02:51:10
https://www.reddit.com/r/LocalLLaMA/comments/1rjdob7/generate_3d_models_with_trellis2_in_colab_working/
Interesting-Town-433
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdob7
false
null
t3_1rjdob7
/r/LocalLLaMA/comments/1rjdob7/generate_3d_models_with_trellis2_in_colab_working/
false
false
https://external-preview…edbe7d9ea1e231ee
1
null
Ollama keeps loading with Openclaw
1
I am able to easily run qwen3:8b with a 32k context window using just ollama, but whenever I do ollama launch openclaw and run even a smaller model like qwen3:1.7b with a 16k context window, it doesn't load the response and gives "fetch failed" — even if it doesn't use all the RAM I have. Is there a fix, or should I just have much st...
2026-03-03T02:50:52
https://www.reddit.com/r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/
Ilishka2003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdo1i
false
null
t3_1rjdo1i
/r/LocalLLaMA/comments/1rjdo1i/ollama_keeps_loading_with_openclaw/
false
false
self
1
null
Agent reliability
1
How does everyone measure the reliability of agents?
2026-03-03T02:43:31
https://www.reddit.com/r/LocalLLaMA/comments/1rjdi1d/agent_reliability/
Evening-Arm-34
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdi1d
false
null
t3_1rjdi1d
/r/LocalLLaMA/comments/1rjdi1d/agent_reliability/
false
false
self
1
null
Dual RTX 3090 on B550 -- 70B models produce garbage at ctx >2048 with llama.cpp layer split. Exhausted every env var. Anyone solved this?
1
Hardware: - 2x RTX 3090 24GB - MSI MAG B550 Tomahawk MAX WiFi - Ryzen 5 5600 - GPU 0 in CPU-direct slot (Gen4 x16), GPU 1 in chipset slot (Gen3 x4 via riser) - No P2P support (CNS per nvidia-smi topo) Software: - llama.cpp b8138, CUDA 12.0, driver 580.x - --split-mode layer...
2026-03-03T02:38:56
https://www.reddit.com/r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/
MaleficentMention703
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjdeat
false
null
t3_1rjdeat
/r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/
false
false
self
1
null
This is how you know
1
When you start telling your LLM "Implement the FULL PLAN DO NOT STOP UNTIL IT HAS BEEN VERIFIED TRUTH VIA TERMINAL THAT IT IS FUNCTIONING AS INTENDED"
2026-03-03T02:30:30
https://www.reddit.com/r/LocalLLaMA/comments/1rjd7j3/this_is_how_you_know/
Apart-Yam-979
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjd7j3
false
null
t3_1rjd7j3
/r/LocalLLaMA/comments/1rjd7j3/this_is_how_you_know/
false
false
self
1
null
Qwen 2.5 -> 3 -> 3.5, smallest models. Incredible improvement over the generations.
1
You might argue Qwen 3.5 is the best because it's 0.8B, but I'm pretty sure a significant part of that is the vision encoder and the language model itself is smaller.
2026-03-03T02:26:58
https://www.reddit.com/gallery/1rjd4pv
airbus_a360_when
reddit.com
1970-01-01T00:00:00
0
{}
1rjd4pv
false
null
t3_1rjd4pv
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/
false
false
https://preview.redd.it/…7293959e09ff9155
1
null
Qwen3.5 Llamacpp command-line flags for new folks switching from Ollama/Lmstudio
1
Use the **Q4KM** quant from unsloth and enable **q8 kv** cache quant. For vision include **mmproj**; don't use the og fp32, use **bf16 or f16**. **llama.cpp command:** ./llama-server \ -m "path/Qwen3.5-35B-A3B-Q4_K_M.gguf" \ --mmproj "path/mmproj-bf16.gguf" \ --port "Port" \ --ctx-size "co...
2026-03-03T02:20:00
https://www.reddit.com/r/LocalLLaMA/comments/1rjcz7r/qwen35_llamacpp_commandline_flags_for_new_folks/
maho_Yun
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcz7r
false
null
t3_1rjcz7r
/r/LocalLLaMA/comments/1rjcz7r/qwen35_llamacpp_commandline_flags_for_new_folks/
false
false
self
1
null
Qwen 3.5 4B is scary smart
1
Using PocketPal on an iPhone 17 Pro Max. Let me know if any of you guys have had an experience like mine where the knowledge from such a small model was scary impressive.
2026-03-03T02:09:28
https://i.redd.it/5980e6dbnqmg1.png
Hanthunius
i.redd.it
1970-01-01T00:00:00
0
{}
1rjcqm5
false
null
t3_1rjcqm5
/r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/
false
false
https://preview.redd.it/…f6f2fac5fcc59e4f
1
{'images': [{'source': {'url': 'https://preview.redd.it/5980e6dbnqmg1.png?auto=webp&s=2800874832b1ddb03759abd786d2f649d16dfc02', 'width': 1320, 'height': 2868}, 'resolutions': [{'url': 'https://preview.redd.it/5980e6dbnqmg1.png?width=108&crop=smart&auto=webp&s=e6bc878f1e087d086f7f64e93aa97b27b24e05fb', 'width': 108, 'h...
[Help] Deploying Llama-3 8B Finetune for Low-Resource Language (Sinhala) on Free Tier? 4-bit GGUF ruins quality.
1
I am a final-year undergraduate student building an educational storytelling app for primary school children in Sri Lanka. I have successfully fine-tuned the `ihalage/llama3-sinhala-8b` model (Llama-3 base) using Unsloth on an A100 to generate culturally aligned Sinhala stories and JSON quizzes. **The Problem:** I nee...
2026-03-03T02:02:31
https://www.reddit.com/r/LocalLLaMA/comments/1rjckv2/help_deploying_llama3_8b_finetune_for_lowresource/
Annual-Captain-7642
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjckv2
false
null
t3_1rjckv2
/r/LocalLLaMA/comments/1rjckv2/help_deploying_llama3_8b_finetune_for_lowresource/
false
false
self
1
null
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp
1
[removed]
2026-03-03T02:00:02
https://www.reddit.com/r/LocalLLaMA/comments/1rjcimq/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
anthonybustamante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcimq
false
null
t3_1rjcimq
/r/LocalLLaMA/comments/1rjcimq/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
false
false
https://external-preview…f4e7bdae16218090
1
null
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp
1
[removed]
2026-03-03T01:58:21
https://www.reddit.com/r/LocalLLaMA/comments/1rjchbl/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
anthonybustamante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjchbl
false
null
t3_1rjchbl
/r/LocalLLaMA/comments/1rjchbl/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
false
false
https://external-preview…f4e7bdae16218090
1
null
qwen3.5-9b q4-k-m in LM studio thinking too much!
1
I must force-stop it several times. I just stopped it after 31 minutes. Has anyone else had this happen?
2026-03-03T01:55:53
https://www.reddit.com/r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/
yingzir
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcfdk
false
null
t3_1rjcfdk
/r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/
false
false
self
1
null
Can I use an old mining rig as a LLM server?
1
[removed]
2026-03-03T01:52:03
https://www.reddit.com/r/LocalLLaMA/comments/1rjcc9u/can_i_use_an_old_mining_rig_as_a_llm_server/
Public-Call-6174
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjcc9u
false
null
t3_1rjcc9u
/r/LocalLLaMA/comments/1rjcc9u/can_i_use_an_old_mining_rig_as_a_llm_server/
false
false
self
1
null
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp
1
https://reddit.com/link/1rjc60i/video/e9g0s5c7jqmg1/player

I have two Rockchip RK3588's: an **Orange Pi 5 Plus (32gb RAM)** and an **Orange Pi 5 Max (16gb)**. I'm using the most recent version of **ik\_llama.cpp** for its CPU optimizations, but I will include llama.cpp's results as well.

I wanted to see what Qwen3.5-35B quants I could run on them, so here's what I found today.

* **Runs**: 3 per model x {ik\_llama, llama.cpp}, page cache dropped before every run
* **Prompt**: "Explain the RK3588 in 5 bullets."
* **Generation**: 512 tokens, 16384 context

# Orange Pi 5 Plus (32GB)

|Model|llama.cpp average t/s|ik\_llama.cpp average t/s|ik Speedup|Size (GiB)|
|:-|:-|:-|:-|:-|
|Unsloth UD-Q4\_K\_M|3.60|**8.20**|2.28x|18.5|
|Bartowski Q4\_K\_M|3.70|**7.79**|2.11x|19.8|
|Bartowski Q6\_K\_L|3.33|**6.45**|1.94x|27.0|

The ~27% speed increase when using Q4 is probably worth the precision tradeoff, but ymmv of course.

# Orange Pi 5 Max (16GB)

|Model|llama.cpp average t/s|ik\_llama.cpp average t/s|ik Speedup|Size (GiB)|
|:-|:-|:-|:-|:-|
|Bartowski Q2\_K\_L|3.73|**8.11**|2.17x|12.1|

I didn't have much time to experiment with the Max, but I'll do more tomorrow.

# Build llama.cpp

    git clone https://github.com/ggml-org/llama.cpp
    cd llama.cpp

    cmake -S . -B build \
      -DCMAKE_BUILD_TYPE=Release \
      -DGGML_NATIVE=ON \
      -DGGML_OPENMP=ON

    cmake --build build --config Release -j"$(nproc)"

# Build ik_llama.cpp

    git clone https://github.com/ikawrakow/ik_llama.cpp
    cd ik_llama.cpp

    CFLAGS="-O3 -pipe -march=native" \
    CXXFLAGS="-O3 -pipe -march=native -include arm_neon.h" \
    cmake -S . -B build -DGGML_NATIVE=ON -DGGML_OPENMP=ON

    cmake --build build --config Release -j"$(nproc)"

# Commands

**llama.cpp:**

    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' && \
    taskset -c 4-7 ~/llama.cpp/build/bin/llama-cli \
      -m <MODEL_PATH> \
      -t 4 -c 16384 -n 512 -st \
      -p "Explain the RK3588 in 5 bullets."

**ik\_llama.cpp:**

    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches' && \
    echo "" | taskset -c 4-7 ~/ik_llama.cpp/build/bin/llama-cli \
      -m <MODEL_PATH> \
      -t 4 -c 16384 -n 512 \
      -p "Explain the RK3588 in 5 bullets."

Thank you to Unsloth and Bartowski for their open source contributions. I was inspired to make this post after seeing u/jslominski's for the RPi ([link](https://www.reddit.com/r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/)).

I will be testing more models tomorrow, including some on my Jetson Orin Nano and other pcs. Right now I'm testing Qwen3.5-9b and -27b on some gaming laptops... let me know if you want to see anything in particular or if we can further improve the results.
2026-03-03T01:44:16
https://www.reddit.com/r/LocalLLaMA/comments/1rjc60i/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
anthonybustamante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjc60i
false
null
t3_1rjc60i
/r/LocalLLaMA/comments/1rjc60i/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/
false
false
https://external-preview…f4e7bdae16218090
1
null
Benchmarked Qwen 3.5 small models (0.8B/2B/4B/9B) on few-shot learning — adding examples to 0.8B code tasks actually makes it worse
1
Ran all four Qwen 3.5 small models through a few-shot evaluation on LM Studio — 3 tasks (classification, code fix, summarization) at 0/1/2/4/8-shot with TF-IDF example selection. **Image 1 — Code fix**: 0.8B scores 67% at zero-shot, then drops to 33% the moment you add 1 example and never recovers. 2B peaks at 100% at...
2026-03-03T01:31:50
https://www.reddit.com/gallery/1rjbw0p
Rough-Heart-7623
reddit.com
1970-01-01T00:00:00
0
{}
1rjbw0p
false
null
t3_1rjbw0p
/r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/
false
false
https://preview.redd.it/…b29dbccfa2a767ff
1
null
I made a guardrail that works with Ollama/llama.cpp to catch hallucinations during streaming — open source, runs locally, no API calls needed
1
https://github.com/anulum/director-ai — Has anyone else tried running hallucination detection locally? Curious what approaches are working for you.
2026-03-03T01:20:12
https://www.reddit.com/r/LocalLLaMA/comments/1rjbmc9/i_made_a_guardrail_that_works_with_ollamallamacpp/
Diligent-Tomorrow-82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjbmc9
false
null
t3_1rjbmc9
/r/LocalLLaMA/comments/1rjbmc9/i_made_a_guardrail_that_works_with_ollamallamacpp/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?auto=webp&s=4c8bbc7ea7905ba69da8877ca6dd0b1e313fbdf7', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/ynt2HmuaC6ntlDVMJj_FkeAth8kmXtL8GKLvaachO_k.png?width=108&crop=...
Whispr Flow - Free Windows - What's best in early 2026?
1
What is the best speech-to-text input for Windows at the moment? Free, open source? It's hard to google these things because the space changes so frequently.
2026-03-03T01:09:38
https://www.reddit.com/r/LocalLLaMA/comments/1rjbdhh/whispr_flow_free_windows_whats_best_in_early_2026/
Plane_Garbage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjbdhh
false
null
t3_1rjbdhh
/r/LocalLLaMA/comments/1rjbdhh/whispr_flow_free_windows_whats_best_in_early_2026/
false
false
self
1
null
PSA: If you want to test new models, use llama.cpp/transformers/vLLM/SGLang
1
There's so many comments/posts discussing how new qwen models have issues with super long chain of thoughts, problems with tool calls and outright garbage responses. The thing is, those only happen with Ollama, LMStudio and other frameworks, that are basically llama.cpp but worse. Ollama is outright garbage for multip...
2026-03-03T01:03:00
https://www.reddit.com/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/
lans_throwaway
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb7yk
false
null
t3_1rjb7yk
/r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/
false
false
self
1
null
Transformers for Numeric Data
1
Pretty much the title. It seems like in a lot of fields, transformers have usurped the crown and proven they are superior. For example, translation: was HMMs, and now Transformers are the standard. That specific example actually is what makes me feel transformers would be great for timeseries prediction (ie. market pr...
2026-03-03T01:02:46
https://www.reddit.com/r/LocalLLaMA/comments/1rjb7s0/transformers_for_numeric_data/
JustinPooDough
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb7s0
false
null
t3_1rjb7s0
/r/LocalLLaMA/comments/1rjb7s0/transformers_for_numeric_data/
false
false
self
1
null
No thinking in unsloth qwen3.5 quants?
1
It doesn't matter what parameters I pass, I can't enable thinking in the unsloth GGUFs on the new small dense models. Using Bartowski quants it works normally. Anyone else experiencing this? Did they change the template to disable reasoning?
2026-03-03T00:57:19
https://www.reddit.com/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/
guiopen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb34p
false
null
t3_1rjb34p
/r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/
false
false
self
1
null
Self hosted provider tunnel.
1
Lots of agentic coding CLI tools allow openai\_compatible custom self-hosted providers (I'm not talking about localhost), for example like [https://myproxy.com/v1](https://myproxy.com/v1). Most of them error for some reason when trying to do this; only Kilo CLI did I actually get to work. Anyone tried this, exposing their ...
2026-03-03T00:55:10
https://www.reddit.com/r/LocalLLaMA/comments/1rjb1d1/self_hosted_provider_tunnel/
Express_Quail_1493
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjb1d1
false
null
t3_1rjb1d1
/r/LocalLLaMA/comments/1rjb1d1/self_hosted_provider_tunnel/
false
false
self
1
null
Is there a list of the tools Gemini/ChatGPT/Claude have access to in their web chat interfaces to replicate locally?
1
It is clear that the closed providers have tons of tools set up behind the scenes, hidden from view, that improve the user experience, and I would love to be able to recreate the environment they have set up to possibly improve the performance of a local model like Qwen 3.5 27B that has enough context to support callin...
2026-03-03T00:53:24
https://www.reddit.com/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/
OUT_OF_HOST_MEMORY
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjazyt
false
null
t3_1rjazyt
/r/LocalLLaMA/comments/1rjazyt/is_there_a_list_of_the_tools_geminichatgptclaude/
false
false
self
1
null
How do you configure your local model better for agentic tools? I'm only changing context
1
I see some of you configure like 5 or 7 parameters when hosting the model with llama, ollama or lmstudio. Honestly I'm just changing the context window and maybe temperature. What is the recommended configuration for agentic coding, tools usage?
2026-03-03T00:51:46
https://www.reddit.com/r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/
former_farmer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjaymu
false
null
t3_1rjaymu
/r/LocalLLaMA/comments/1rjaymu/how_do_you_configure_your_local_model_better_for/
false
false
self
1
null
General LLM that uses "sub AI's" to complete complex tasks
1
I am beginning research on running a local AI and tried looking for an answer online and in this reddit, but couldn't find anything. The scenario I am thinking of is having a "main" LLM that you talk to and has a general training data set (For ease compare it to the same use as chatgpt), and say I wanted this ai to g...
2026-03-03T00:45:14
https://www.reddit.com/r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/
JWSlegend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjat7a
false
null
t3_1rjat7a
/r/LocalLLaMA/comments/1rjat7a/general_llm_that_uses_sub_ais_to_complete_complex/
false
false
self
1
null
What LLM to replace Claude 3.5 sonnet for server integration?
1
So I'm a bit confused on what I need. I have openclaw running on an unraid server right now. It has a 13700 (non-k) 64GB DDR4 and a rtx4070ti super. I'm trying to compare the capability of that to something like a M4 pro mac mini with 64GB memory. Or I'd even consider getting a few mac mini. I have a base M4 16GB sit...
2026-03-03T00:36:16
https://www.reddit.com/r/LocalLLaMA/comments/1rjaliw/what_llm_to_replace_claude_35_sonnet_for_server/
MartiniCommander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjaliw
false
null
t3_1rjaliw
/r/LocalLLaMA/comments/1rjaliw/what_llm_to_replace_claude_35_sonnet_for_server/
false
false
self
1
null
Looking for CLI beta testers (Docker, self-hosted, AGPL) for my open-source AI agent governance platform
1
I've spent the last 3 weeks building SIDJUA, an open-source (AGPL-3.0) governance layer for multi-agent AI systems. It's a CLI tool that lets you define agent hierarchies, enforce rules before agents can act, track costs, and audit everything. Self-hosted, Docker, no cloud dependency. The problem it solves: AI agents ...
2026-03-03T00:34:21
https://www.reddit.com/r/LocalLLaMA/comments/1rjajsz/looking_for_cli_beta_testers_docker_selfhosted/
Inevitable_Raccoon_9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rjajsz
false
null
t3_1rjajsz
/r/LocalLLaMA/comments/1rjajsz/looking_for_cli_beta_testers_docker_selfhosted/
false
false
self
1
null
GPT-OSS had to think for 4 minutes where Qwen3.5-9B got it like a breeze
1
2026-03-03T00:11:52
https://i.redd.it/1e2qs50i2qmg1.png
Extraaltodeus
i.redd.it
1970-01-01T00:00:00
0
{}
1rja0sb
false
null
t3_1rja0sb
/r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/
false
false
https://preview.redd.it/…170840195cdc6abe
1
{'images': [{'source': {'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?auto=webp&s=717dc039727a44b406b3a7f849fd29c2ce241897', 'width': 1025, 'height': 397}, 'resolutions': [{'url': 'https://preview.redd.it/1e2qs50i2qmg1.png?width=108&crop=smart&auto=webp&s=ad31a251123b6fcd529d1939f3971abf606b9801', 'width': 108, 'he...
API price for the 27B qwen 3.5 is just outrageous
1
https://preview.redd.it/…st this much lol
2026-03-02T23:43:35
https://www.reddit.com/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/
Ok-Internal9317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj9bl7
false
null
t3_1rj9bl7
/r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/
false
false
https://preview.redd.it/…6975a25964c0b240
1
null
Manage Qwen 3.5 Model Settings with LiteLLM Proxy
1
I noticed a lot of people are running the Qwen 3.5 models manually juggling the sampling settings while running Llama.cpp. The easiest way I found is to use LiteLLM Proxy to handle the sampling settings and let Llama.cpp serve the model. LiteLLM Proxy is really easy to set up. # Quickstart Here is a quick-star...
2026-03-02T23:30:19
https://www.reddit.com/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/
CATLLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8zuh
false
null
t3_1rj8zuh
/r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/
false
false
self
1
null
where can I get good priced 3090s?
1
I'm in the US, in Minnesota. I wanna get two for now.
2026-03-02T23:29:55
https://www.reddit.com/r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/
Lord_Curtis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8zhq
false
null
t3_1rj8zhq
/r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/
false
false
self
1
null
Tokyo Openclaw Meetup
1
Hey Tokyo innovators & developers! Hosting a focused offline meetup on the viral open-source AI agent: OpenClaw! Event: Tokyo OpenClaw Developer Meetup (東京 OpenClaw 開発者交流会) • Date & Time: March 7 (Friday) Afternoon (exact time notified after registration) • Format: Small group chats (chill bar/café) + hands-on dis...
2026-03-02T23:27:39
https://www.reddit.com/r/LocalLLaMA/comments/1rj8xi0/tokyo_openclaw_meetup/
Remarkable-Key6575
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8xi0
false
null
t3_1rj8xi0
/r/LocalLLaMA/comments/1rj8xi0/tokyo_openclaw_meetup/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c.jpeg?auto=webp&s=3f677743d4ff1f5f61698ce01c568077d30867d7', 'width': 800, 'height': 419}, 'resolutions': [{'url': 'https://external-preview.redd.it/Z2EYq4G4zN8TuEgoONLwXEvblMrTNNiCFKtvqjoUa4c.jpeg?width=108&crop...
Qwen3.5 4B: overthinking to say hello.
1
Hi everyone, I've been experimenting with Qwen3.5 4B on Ollama, hoping to replace my current model (qwen3:4b-instruct-2507-q4_K_M) in an agentic RAG pipeline. Unfortunately, the results have been disappointing so far. The main issue is that with thinking enabled, the model spends an excessive amount of time reasoning...
2026-03-02T23:27:07
https://i.redd.it/k7wt9n7jtpmg1.png
CapitalShake3085
i.redd.it
1970-01-01T00:00:00
0
{}
1rj8x1q
false
null
t3_1rj8x1q
/r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/
false
false
https://preview.redd.it/…ef663ae387ee9297
1
{'images': [{'source': {'url': 'https://preview.redd.it/k7wt9n7jtpmg1.png?auto=webp&s=c1150340bad51b1fe433fce85c69fc90207d1fc7', 'width': 789, 'height': 1398}, 'resolutions': [{'url': 'https://preview.redd.it/k7wt9n7jtpmg1.png?width=108&crop=smart&auto=webp&s=324d773e53d852e1863e983e1824b02575b23917', 'width': 108, 'he...
Ozymandias got a ton of new stuff – Mercator shift detector, Signal Graph, Oracle watchlist, Pantheon releases, Rabbit Hole & more
1
[removed]
2026-03-02T23:25:28
https://ozymandias.group/
False_Ad8389
ozymandias.group
1970-01-01T00:00:00
0
{}
1rj8vgn
false
null
t3_1rj8vgn
/r/LocalLLaMA/comments/1rj8vgn/ozymandias_got_a_ton_of_new_stuff_mercator_shift/
false
false
default
1
null
just getting started on local llm on macbook air with 24gb of ram, are Qwen models the best ones currently?
1
Also, should I go for models published and fined tuned by Unsloth only? Is is better to get a high parameter model with low bit quantization or a lower parameter with a higher bit quantization?
2026-03-02T23:24:23
https://www.reddit.com/r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/
murkomarko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8uj5
false
null
t3_1rj8uj5
/r/LocalLLaMA/comments/1rj8uj5/just_getting_started_on_local_llm_on_macbook_air/
false
false
self
1
null
llama.cpp models preset with multiple presets for the same model
1
I set up 2 presets in my ini file for the Qwen 3.5 model based on the unsloth recommendations, and I am curious if there is something I can do to make this better. As far as I can tell (and maybe I am wrong here), it seems that when I switch between the two in the web UI it needs to reload the model, even though it's the s...
2026-03-02T23:22:19
https://www.reddit.com/r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/
stoystore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8sow
false
null
t3_1rj8sow
/r/LocalLLaMA/comments/1rj8sow/llamacpp_models_preset_with_multiple_presets_for/
false
false
self
1
null
For sure
1
Yes Qwen3.5-4B, for sure. (I'm using PocketPal on Android and downloaded the Q4_0 GGUF from their Hugging Face interface.) Has anybody got this model working on PocketPal?
2026-03-02T23:08:42
https://i.redd.it/o75mdgehrpmg1.jpeg
Open_Establishment_3
i.redd.it
1970-01-01T00:00:00
0
{}
1rj8gb4
false
null
t3_1rj8gb4
/r/LocalLLaMA/comments/1rj8gb4/for_sure/
false
false
https://preview.redd.it/…de91313bd17b58bc
1
{'images': [{'source': {'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?auto=webp&s=253fd329c02421f068f0386df7b458ee3a2f7a89', 'width': 1440, 'height': 2951}, 'resolutions': [{'url': 'https://preview.redd.it/o75mdgehrpmg1.jpeg?width=108&crop=smart&auto=webp&s=17c1211ac8726e47b261b16b345b3d5f5712cfe1', 'width': 108, ...
Is anyone else seeing Qwen 3.5 35B outperform cloud APIs on structured tasks?
1
Ran some quick head-to-heads this weekend. Local Qwen 3.5 35B (Ollama, M3 Max 36GB) vs GPT-5-mini, GPT-5-nano, Gemini 3 Flash/Pro, and MiniMax on a few simple agent tasks: entity extraction, summarization, and sentiment classification. Full disclaimer: these are pretty trivial tasks, not trying to claim this is rigoro...
2026-03-02T23:06:24
https://www.reddit.com/r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/
Beautiful-Honeydew10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj8e7z
false
null
t3_1rj8e7z
/r/LocalLLaMA/comments/1rj8e7z/is_anyone_else_seeing_qwen_35_35b_outperform/
false
false
https://preview.redd.it/…9b32dfb4f5d3ddb8
1
null
Merlin Research released Qwen3.5-4B-Safety-Thinking - a 4B safety-aligned reasoning model built on Qwen3.5
1
The model is designed for structured 'thinking' and safety in real-world scenarios, including agent systems. Key improvements: * Improved ability to accurately follow strict instructions in prompts. * Based on the use of Bloom and Petri methods from Anthropic and resistant to hacking attempts. * Increased resistance ...
2026-03-02T23:01:39
https://www.reddit.com/r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/
Intelligent-Space778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj89qy
false
null
t3_1rj89qy
/r/LocalLLaMA/comments/1rj89qy/merlin_research_released_qwen354bsafetythinking_a/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?auto=webp&s=5587798bb04611aec3e818eb73cceadb65a6f124', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/alnKoAqjAHo_N_35yPxK6DsZBISvodkF7y8KDPsMI5E.png?width=108&crop=...
Merlin Research released Qwen3.5-4B-Safety-Thinking — a 4B safety-aligned reasoning model built on Qwen3.5
1
[deleted]
2026-03-02T22:59:58
[deleted]
1970-01-01T00:00:00
0
{}
1rj87zg
false
null
t3_1rj87zg
/r/LocalLLaMA/comments/1rj87zg/merlin_research_released_qwen354bsafetythinking_a/
false
false
default
1
null
Merlin Research released Qwen3.5-4B-Safety-Thinking — a 4B safety-aligned reasoning model built on Qwen3.5
1
[deleted]
2026-03-02T22:55:06
[deleted]
1970-01-01T00:00:00
0
{}
1rj83f3
false
null
t3_1rj83f3
/r/LocalLLaMA/comments/1rj83f3/merlin_research_released_qwen354bsafetythinking_a/
false
false
default
1
null
Where to get a comprehensive overview on the cutting edge in open source / frontier model AI
1
Hey guys! I'm new here. I've just committed to buying an RTX 5090-powered laptop and want to start vibe coding, generating realistic AI videos, and experimenting with deepfakes etc. Is there a unified resource for this? Ideally something that explains how workflows work in ComfyUI, how to find the best tool for the j...
2026-03-02T22:50:37
https://www.reddit.com/r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/
StabledFusion
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7z9v
false
null
t3_1rj7z9v
/r/LocalLLaMA/comments/1rj7z9v/where_to_get_a_comprehensive_overview_on_the/
false
false
self
1
null
PMetal - LLM fine-tuning framework for Apple Silicon, written in Rust with custom Metal GPU kernels
1
Hey everyone, we're releasing PMetal (Powdered Metal) today! A Rust framework for fine-tuning LLMs natively on Apple Silicon using custom Metal compute shaders. It's a rust library (python bindings coming soon) that covers the full training pipeline: LoRA/QLoRA adapters, RLHF alignment (DPO, GRPO, DAPO, GSPO, KTO, Sim...
2026-03-02T22:49:34
https://www.reddit.com/r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/
RealEpistates
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7y9d
false
null
t3_1rj7y9d
/r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/
false
false
self
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?auto=webp&s=3a62d229478ee9d92197df3c1537a509b01ef9d9', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/2d4E-U9oBPv1sdW-ZvSW_vbaE5KmN_-_Nxl5p-5Z5qo.png?width=108&crop=...
Any issues / tips for running Linux with a 5060Ti (16gb) for Local LLM's? Best Linux Distro?
1
I'm debating with Linux distro to install on an extra NVMe drive I have, to dedicate to learning Local LLMs, AI, and programming. I have a Gigabyte Nvidia GEForce RTX 5060Ti (16GB). **Anything I should watch out for?** **Any particular Linux distro I should use for these purposes?** \----- My machine specs: * ...
2026-03-02T22:49:18
https://www.reddit.com/r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/
QuestionAsker2030
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7y0u
false
null
t3_1rj7y0u
/r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/
false
false
self
1
null
I made this.
1
[removed]
2026-03-02T22:42:51
https://www.reddit.com/r/LocalLLaMA/comments/1rj7ryg/i_made_this/
Distinct-Patient778
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7ryg
false
null
t3_1rj7ryg
/r/LocalLLaMA/comments/1rj7ryg/i_made_this/
false
false
self
1
null
Rossavaxx
1
[deleted]
2026-03-02T22:40:07
[deleted]
1970-01-01T00:00:00
0
{}
1rj7pfq
false
null
t3_1rj7pfq
/r/LocalLLaMA/comments/1rj7pfq/rossavaxx/
false
false
default
1
null
I need an uncensored LLM for 8GB vram
1
I am currently using Mistral 7B (with a Zorg jailbreak) and it's giving good performance. The issue is that the jailbreak prompt is making it hallucinate a lot. Any recommendations for a fully uncensored LLM?
2026-03-02T22:39:40
https://www.reddit.com/r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/
Safe_Location9897
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7p2h
false
null
t3_1rj7p2h
/r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/
false
false
self
1
null
The biggest pain in local fine-tuning isn't training - it's everything around it
1
I've been working on local LLM fine-tuning for a few months and I keep hitting the same problems that have nothing to do with the actual training. **Data prep is a mess.** Every project starts with me manually formatting data into JSONL, guessing at splits, hoping I didn't introduce duplicates. There's no versioning. ...
2026-03-02T22:25:12
https://www.reddit.com/r/LocalLLaMA/comments/1rj7bvo/the_biggest_pain_in_local_finetuning_isnt/
Critical_Letter_7799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj7bvo
false
null
t3_1rj7bvo
/r/LocalLLaMA/comments/1rj7bvo/the_biggest_pain_in_local_finetuning_isnt/
false
false
self
1
null
The biggest pain in local fine-tuning isn't training - it's everything around it
1
2026-03-02T22:21:53
https://www.reddit.com/r/LocalLLaMA/comments/1rj78mn/the_biggest_pain_in_local_finetuning_isnt/
Critical_Letter_7799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj78mn
false
null
t3_1rj78mn
/r/LocalLLaMA/comments/1rj78mn/the_biggest_pain_in_local_finetuning_isnt/
false
false
self
1
null
Qwen3.5-122B-A10B-Q8 handling the car wash question like a champ! 9 T/s on the 2x agx orin 1x3090 RPC mesh!
1
85k context, high volume of reasoning for that question, but that makes sense. I find 9 t/s highly usable. Another win for the Clarkson jetson lab!
2026-03-02T22:19:50
https://v.redd.it/wgd9fdopipmg1
braydon125
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/
1970-01-01T00:00:00
0
{}
1rj76pb
false
null
t3_1rj76pb
/r/LocalLLaMA/comments/1rj76pb/qwen35122ba10bq8_handling_the_car_wash_question/
false
false
https://external-preview…3f0f53041a602903
1
{'images': [{'source': {'url': 'https://external-preview.redd.it/MHV6MjZ2b3BpcG1nMZtiF550ubXjfviyIKED8VdMkOUbP3yCTamJRagJLpbS.png?format=pjpg&auto=webp&s=30a7b6241976f0c0eac8cc6caae843faa39b4578', 'width': 405, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/MHV6MjZ2b3BpcG1nMZtiF550ubXjfviyIKE...
What exactly can I use small (2-3B) AI models for in mobiles?
1
I recently installed the Locally AI app. I’ve seen so many open source models released for use in mobile phones. I installed Qwen 3, LFM 2.5 and Gemma 3n. The answers they produce for technical engineering questions are so generic that I don’t see a point in using them. I’m curious to know the use case of these 2-3B pa...
2026-03-02T22:14:40
https://www.reddit.com/r/LocalLLaMA/comments/1rj71wv/what_exactly_can_i_use_small_23b_ai_models_for_in/
Sylverster_Stalin_69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj71wv
false
null
t3_1rj71wv
/r/LocalLLaMA/comments/1rj71wv/what_exactly_can_i_use_small_23b_ai_models_for_in/
false
false
self
1
null
Cheap ai api services
0
I found a site offering cheap Veo 3 and LLM models, and they’ll be launching soon. I like it for now. If anyone is interested, send me a DM and I can share it with you too.
2026-03-02T22:07:32
https://www.reddit.com/r/LocalLLaMA/comments/1rj6val/cheap_ai_api_services/
PromotionEuphoric509
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1rj6val
false
null
t3_1rj6val
/r/LocalLLaMA/comments/1rj6val/cheap_ai_api_services/
false
false
self
0
null