
Recent content by johnspack

  1. johnspack

    Do you use Linux?

    You might want to try switching to X11 instead of Wayland (there's a quick session-check sketch at the bottom of this list). Wayland is not fully mature yet and has all kinds of issues. For me it simply won't work right with my NVIDIA card: graphics get mucked up in things like VirtualBox VMs, and, oh yeah, only half of my control panel came up. No...
  2. johnspack

    What does this breakthrough mean for the future of computing? Help me understand the new petahertz transistor.

    I'll just chuck this in here... it's been around for years: https://en.wikipedia.org/wiki/Optical_computing
  3. johnspack

    The Official Linux/Unix Desktop Screenshots Megathread

    Had to check it out... Kubuntu Plucky Puffin (25.04). Nice new kernel this time!
  4. johnspack

    What local LLM-s you use?

    Don't know if anyone has noticed this or not, but I seem to get up to 20% better performance under Linux....
  5. johnspack

    Can you guess Which game it is?

    Heh, my LLM says Cyberpunk... but it's probably wrong....
  6. johnspack

    What local LLM-s you use?

    Yep, using minicpm and the matching minicpm-mmproj-f16 model; it's many times faster for images. Quite the learning curve.... Now running minicpm ggml-model-f16 with mmproj-model-f16, still really fast but smarter. It can also do handwriting recognition; need to test that a bit more (see the vision-request sketch at the bottom of this list).
  7. johnspack

    What local LLM-s you use?

    Does anyone know the best text/vision model combo to use on a lower-end computer? Currently trying gemma-3-12b-it-q6_k_l with gemma-12b-mmproj, but it's stupidly slow.
  8. johnspack

    What local LLM-s you use?

    Well, that's nice: Koboldcpp now supports Gemma-3. Running gemma-3-4b-it-16bf and it's screaming fast. Bigger models still beat up on my system, but that's expected.
  9. johnspack

    What local LLM-s you use?

    How are you running Gemma? It won't run under Koboldcpp, or maybe it's just my system.
  10. johnspack

    What local LLM-s you use?

    If anyone is looking for uncensored models, I found another one: https://huggingface.co/bartowski/Qwen_QwQ-32B-GGUF Here's the output from "what is Taiwan?" Unless all non-DeepSeek models are uncensored... I'm not even sure yet.... I'm preferring QwQ over DeepSeek now. It's just as fast, and...
  11. johnspack

    The Official Linux/Unix Desktop Screenshots Megathread

    Here's Arch doing something smart with Koboldcpp....
  12. johnspack

    What local LLM-s you use?

    As I mentioned... DeepSeek uncensored is 3x faster for me now. I'm not paying for any models, thank you.
  13. johnspack

    What local LLM-s you use?

    Heh, found out the hard way to use a very clean OS install to run these. My main Linux install failed to run it at all, so I resorted to booting Win11 to run it. Just tried my backup clean Arch install, and I'm getting 3x the tokens/s compared to Windows (rough measurement sketch at the bottom of this list), and I'm pretty sure the other Arch when it...
  14. johnspack

    What local LLM-s you use?

    Not sure if this is getting closer or not.... https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Qwen-32B-Uncensored-GGUF
  15. johnspack

    What local LLM-s you use?

    If you want uncensored, then you want abliterated: https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-32B-abliterated-GGUF
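
Re post #1 (Wayland vs. X11): a minimal sketch, in Python, of checking which session type you're currently running before switching. It just reads the XDG_SESSION_TYPE environment variable that most session managers set; the variable may be absent on unusual setups.

```python
import os

# XDG_SESSION_TYPE is set by most login/session managers:
# typically "x11" or "wayland"; it may be unset on unusual setups.
session = os.environ.get("XDG_SESSION_TYPE", "unknown")
print(f"Current session type: {session}")

if session == "wayland":
    # Most distros let you pick a "Plasma (X11)" or similar entry
    # from the session menu on the login screen instead.
    print("Running under Wayland; switch sessions at the login screen.")
```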
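Re posts #6 and #7 (text + mmproj vision combos): a minimal sketch of sending an image, e.g. for handwriting transcription, to a locally running Koboldcpp instance through an OpenAI-compatible chat endpoint. The port (5001, Koboldcpp's usual default), the endpoint path, the prompt, and the filename note.jpg are assumptions; the vision model and its matching mmproj projector have to be loaded server-side already.

```python
import base64
import json
import urllib.request

# Assumption: Koboldcpp is serving an OpenAI-compatible API on its
# usual default port (5001), with a vision model + matching mmproj loaded.
URL = "http://localhost:5001/v1/chat/completions"

with open("note.jpg", "rb") as f:  # hypothetical image file
    img_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "local",  # a single-model local server typically ignores this
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe the handwriting in this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{img_b64}"}},
        ],
    }],
    "max_tokens": 256,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
print(reply["choices"][0]["message"]["content"])
```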
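Re post #13 (3x the tokens/s on a clean Arch install): a rough throughput-measurement sketch against the same kind of local endpoint, handy for comparing OS installs apples-to-apples. It assumes the server fills in an OpenAI-style usage block in its response; if it doesn't, the sketch falls back to a crude whitespace token count.

```python
import json
import time
import urllib.request

URL = "http://localhost:5001/v1/chat/completions"  # assumed Koboldcpp default port

payload = {
    "model": "local",  # typically ignored by a single-model local server
    "messages": [{"role": "user", "content": "Write a short story about a robot."}],
    "max_tokens": 200,
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

start = time.time()
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
elapsed = time.time() - start

# Assumes an OpenAI-style "usage" block; falls back to a crude
# whitespace count of the generated text if it's missing.
text = body["choices"][0]["message"]["content"]
tokens = body.get("usage", {}).get("completion_tokens") or len(text.split())
print(f"~{tokens} tokens in {elapsed:.1f}s -> {tokens / elapsed:.1f} tok/s")
```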