
Is it feasible to build your own "Jarvis" yet?

No, not the true AI part.

Basically, a digital assistant on a local server.

What would it be for?
  • Controlling smart devices in the home (light bulbs, clothes washer, outlets, thermostat, robot vacuum, etc.)
  • Accessing your digital notes and itinerary and reminding you about upcoming deadlines and events
  • Adding things to your digital notes and itinerary, without embellishment
  • Reminding you to do seasonal chores based on weather conditions

Ideally this thing would be functional offline, and would only access specific web-pages if they are available and as-needed. I guess part of the question then is also, "Can you license an AI model for offline use and without a subscription fee."
 
DeepSeek offers an open-source model that is supposed to work offline, locally, and for free.
I've never used it, but theoretically it is quite capable. Does it have a running memory of tasks it will recall later, or is every session a blank slate? Does it keep track of the current time, and will it wake up and give you reminders at specific times without you having to ask? Can it estimate the typical weather for the season while offline? These are questions I can't answer.
Theoretically you can train AI to do what you want it to do. Especially if it is open source and has a community of people testing it out.

Edit:
I asked the online version of DeepSeek some questions. Here is the result:

>do you remember our conversations?

Currently, I don’t have memory retention between conversations—each time you start a new chat, I don’t recall our previous interactions. However, I can reference details you provide in the current conversation to keep things consistent.

If you’d like me to remember something for this session, just let me know, and I’ll do my best to keep track within this chat!

How can I assist you today?

>can you tell me something at a specific time?

Currently, I don’t have the ability to schedule reminders or trigger actions at specific times (like sending a message later). However, you can:

1. **Set your own reminder** (phone alarm, calendar, etc.) and return to chat with me when the time comes.
2. **Ask me something now**, and I’ll respond immediately—for example, a fun fact, a quote, or advice!

What would you like me to share? Or tell me the topic/time, and I’ll help brainstorm solutions! ⏰

*(Note: If this is about productivity, I’m happy to suggest tools like Google Calendar, Todoist, or phone alarms!)*

>does your offline model have either of those features?

Great question! My **offline model** (the version you might run locally, like DeepSeek-V3 or similar open-weight models) also **does not** have built-in features for:
- **Scheduled reminders/alerts** (no native ability to "ping" you at a set time).
- **Memory retention** between sessions (unless manually saved/loaded via external tools).

### Why?
1. **No active connectivity**: Offline models lack internet access or background processes to trigger timed actions.
2. **Session-based**: Like the online version, they typically reset memory after each session unless integrated with a custom app/database.

### Workarounds:
- **External tools**: Pair the model with a scheduler (e.g., cron jobs, Tasker, or scripts) to automate prompts at specific times.
- **Manual save/load**: Some local UIs (like Ollama, LM Studio) allow saving chat history to "remember" context across sessions.

Would you like help brainstorming a specific setup? For example, I can suggest how to combine a local model with a calendar API or script!



....

So the offline model could be tweaked to do it, and odds are someone has already done it and written a guide.
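To make the "pair the model with a scheduler" workaround concrete, here is a minimal sketch in Python. It assumes Ollama is serving its default local HTTP API on port 11434; the model name `llama3.2` is a placeholder for whatever you have pulled. Cron (or Task Scheduler on Windows) provides the timing, and the script just fires a one-shot prompt:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_reminder_request(model: str, reminder: str) -> dict:
    """Build the JSON payload for a one-shot, non-streaming Ollama request."""
    return {
        "model": model,
        "prompt": f"Phrase this reminder as a short spoken announcement: {reminder}",
        "stream": False,
    }

def announce(reminder: str, model: str = "llama3.2") -> str:
    """Send the reminder to the local model and return its response text."""
    payload = json.dumps(build_reminder_request(model, reminder)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # A crontab entry like:  0 19 * * *  /usr/bin/python3 remind.py
    # would run this every evening at 7 PM; cron supplies the clock the
    # model itself lacks.
    print(announce("Take the trash out tonight"))
```

The point is that the model never needs to know the time: the OS wakes the script, the script wakes the model.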

....

And then I said:

>thank you! i asked because someone was wondering

You're very welcome! I’m glad you asked—it’s a great question, and it helps clarify what’s possible (and what’s still sci-fi… for now!).

If that person (or you!) ever wants to explore creative ways to **approximate** these features—like combining local models with other tools—feel free to ping me. I’d be happy to brainstorm ideas!

Otherwise, hope you both have an awesome day! ✨

*(And remember: I’m always here for instant answers, even if I can’t yet whisper them at 3 AM.)*


....

Ha Ha!
 
Wow you went above and beyond answering that question, thanks! That's pretty neat that there is a free offline version.

I just looked at the system requirements. It needs a minimum of 8 physical cores and 16 threads on the CPU. Sounds like I'd need to go from the 9800X3D to the 9950X3D for best results. This makes me realize this sucker would be an energy hog. Maybe they can be configured to be mostly dormant until called for though.

Here is a related article I found on AI power consumption: https://www.technologyreview.com/20...t-not-be-such-good-news-for-energy-after-all/

I don't know why I didn't think of asking an AI about this lol. I just tried Grok which I can access without login. Here is the first result:

Yes, there are several AI models and frameworks that can be used locally as a digital assistant without a monthly subscription fee. These solutions run on your own hardware, ensuring privacy and eliminating recurring costs after initial setup. Below are some of the best options, based on current trends and tools available for local AI deployment:

1. **Ollama**
  • Description: Ollama is an open-source framework designed to run large language models (LLMs) locally on your computer. It supports models like Llama 3.2, Gemma 2, and Mistral, which can function as digital assistants for tasks like answering questions, task automation, or content generation.
  • Features:
    • Runs entirely offline, requiring no internet connection after model download.
    • Supports a variety of open-source models optimized for local use.
    • Easy setup with minimal technical knowledge required.
    • Compatible with platforms like Windows, macOS, and Linux.
  • Hardware Requirements: Works on consumer-grade hardware like laptops or desktops with at least 8GB of RAM for smaller models, though 16GB+ is recommended for better performance.
  • Cost: Free to use, with no subscription fees. Only hardware and electricity costs apply.
  • How to Use: Install Ollama, download a compatible model (e.g., Llama 3.2), and interact via a command-line interface or integrate with a custom UI.
  • Example Use Case: Use Llama 3.2 to create a local assistant that answers queries, summarizes documents, or automates simple workflows.
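The "no memory between sessions" limitation that DeepSeek mentioned applies to any of these local models, but it can be papered over by persisting the transcript yourself and replaying it into the next prompt. A minimal sketch (the file name and transcript format here are my own choices, not anything Ollama or LM Studio prescribes):

```python
import json
from pathlib import Path

HISTORY_FILE = Path("assistant_history.json")  # arbitrary local file (my choice)

def load_history() -> list:
    """Load prior turns from disk, or start fresh if no history exists yet."""
    if HISTORY_FILE.exists():
        return json.loads(HISTORY_FILE.read_text())
    return []

def save_turn(history: list, role: str, content: str) -> list:
    """Append one turn and write the whole transcript back to disk."""
    history.append({"role": role, "content": content})
    HISTORY_FILE.write_text(json.dumps(history, indent=2))
    return history

def build_prompt(history: list, new_question: str) -> str:
    """Replay the saved turns ahead of the new question so the model 'remembers'."""
    lines = [f"{t['role']}: {t['content']}" for t in history]
    lines.append(f"user: {new_question}")
    return "\n".join(lines)
```

This is essentially what the "save chat history" feature in local UIs does under the hood: the memory lives in a file, not in the model.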


BTW, I just had a fun idea for a LLM instance. I could give it all the books in a series (in ebook form) so that it can be the ultimate reference.
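That ebook idea is basically retrieval-augmented generation: rather than retraining the model on the books, you search them for relevant passages and paste those into the prompt. Real setups use embedding models for the search step, but a toy keyword-overlap retriever shows the shape of it:

```python
def score(passage: str, question: str) -> int:
    """Count how many question words appear in the passage (crude relevance)."""
    words = set(question.lower().split())
    return sum(1 for w in passage.lower().split() if w in words)

def best_passages(chapters: list, question: str, k: int = 2) -> list:
    """Return the k most relevant chunks to prepend to the model's prompt."""
    return sorted(chapters, key=lambda c: score(c, question), reverse=True)[:k]

def build_reference_prompt(chapters: list, question: str) -> str:
    """Assemble a prompt that grounds the model in the retrieved excerpts."""
    context = "\n---\n".join(best_passages(chapters, question))
    return f"Using only these excerpts:\n{context}\n\nAnswer: {question}"
```

Swap the keyword scorer for an embedding search and the chapters for chunked ebook text, and you have the "ultimate series reference" without touching the model's weights.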
 
Jarvis, inform this user about the existence of Home Assistant.
Good mention!

Grok actually included it in part 5 of its response. I tried just pasting the entire response earlier but apparently it hits a character limit or something.


### 5. **Home Assistant with Local AI Integration**
- **Description**: Home Assistant is an open-source platform for smart home control that can integrate local AI models (e.g., via Ollama or custom scripts) to act as a digital assistant for home automation.
- **Features**:
  - Controls smart devices (lights, thermostats) offline.
  - Can integrate with local LLMs for voice or text-based commands.
  - Highly customizable with a focus on privacy.
- **Hardware Requirements**: Runs on a Raspberry Pi or similar edge devices for basic setups, with more powerful hardware for AI integration.
- **Cost**: Free, with optional hardware costs (e.g., Raspberry Pi).
- **How to Use**: Set up Home Assistant on a local server, integrate an LLM like Llama via Ollama, and use it for voice or text-based home control. (Source: https://geekyants.com/blog/building-a-smart-assistant-without-cloud-the-future-of-local-ai)
- **Example Use Case**: Create a privacy-focused assistant like "Harmony" to control home devices without cloud data transmission.
 