Automation · Beginner

Jan AI Desktop, Open Source Private Chat For Local LLMs

Jan on Linux, Mac, and Windows for a fully local ChatGPT alternative, my install and verdict

Aditya Sharma · 6 min read
Jan AI desktop showing chat with local Qwen model

Jan is the fully open-source desktop chat app for local LLMs. Apache 2.0, no telemetry by default, no account needed, and a UI that does not look like a fork of someone else's design. I ran it as my ChatGPT replacement for a week on the MacBook and the ThinkCentre. This is the install, the model setup, and the verdict.

What you'll build

Jan installed on your platform of choice, a local model downloaded and chatting, and the local API exposed for scripting. Roughly 15 minutes.

Jan AI on my MacBook. Caption: Jan with Qwen 2.5 7B loaded and a long conversation visible.

Prerequisites

  • Mac (Apple Silicon or Intel), Windows 10/11, or Linux x86_64
  • 16GB RAM minimum for 7B-class models
  • 10GB free disk for the app and a couple of models
  • A connection for the initial model download

If you have less than 16GB RAM, stick with smaller models (Phi-3 Mini, Gemma 2 2B). The app handles them; the 7B+ models need the headroom.
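The RAM numbers above come from a rough rule of thumb you can apply to any model: weights at the quantisation width, plus overhead for the KV cache and runtime. A sketch of that arithmetic; the 30% overhead factor is my assumption, not a Jan figure:

```python
def model_ram_gb(params_billion, bits_per_weight=4, overhead=1.3):
    """Rough RAM estimate for a quantised model: weight bytes at the
    given quantisation width, plus ~30% for KV cache and runtime
    overhead (assumption -- real usage varies by context length)."""
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb * overhead

print(f"7B at 4-bit: ~{model_ram_gb(7):.1f} GB")   # comfortably inside 16GB
print(f"2B at 4-bit: ~{model_ram_gb(2):.1f} GB")   # fine on 8GB machines
```

A 7B model at 4-bit lands around 4.5GB of RAM before context, which is why 16GB total is the sensible floor once the OS and browser take their share.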

Step 1, install Jan

Download from jan.ai. The installer is platform-native:

  • Mac: a signed dmg, drag to Applications
  • Windows: an MSI installer
  • Linux: AppImage or .deb
# Linux .deb
wget https://github.com/janhq/jan/releases/latest/download/jan-linux-amd64.deb
sudo dpkg -i jan-linux-amd64.deb
sudo apt-get install -f   # pull in any dependencies dpkg could not resolve

Jan installed on Linux

The signed Mac dmg opens cleanly with no Gatekeeper friction. The Linux .deb pulls in a few Electron dependencies; the AppImage is the lighter path.

Step 2, download a model

Open Jan, click Hub on the left sidebar. The hub shows curated models with one-click downloads. I picked Qwen 2.5 7B:

Jan model hub with Qwen 2.5 selected

Jan downloads to ~/jan/models/. Progress shows in the GUI; you can chat with another already-downloaded model while a download runs.

Step 3, start a chat

Go to Threads, click "New Chat", select the downloaded Qwen 2.5 7B from the model picker. Type:

Explain SOLID principles in software design with one example each.

Jan chat with Qwen response

First prompt has a 5-10 second cold start. After that, chat is responsive.

Step 4, customise the chat settings

Click the cog icon in the chat panel to adjust:

  • System prompt (the app's instructions to the model)
  • Temperature (0.7 default; drop to 0.3 for focused, more deterministic answers)
  • Max tokens (4096 default; bump for long-form work)
  • Context length (model-dependent)

Jan chat settings panel

Save the settings as a "Thread Template" if you want to reuse this configuration. I have one for "code review" with a low temperature and a custom system prompt.
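Every knob in that panel maps one-to-one onto a field in the OpenAI-style request body the local API accepts (Step 5). Here is roughly what my "code review" template looks like expressed as a payload; the model id and the prompt wording are mine, so copy the exact model name from your own picker:

```python
# Hypothetical "code review" thread template as an OpenAI-style payload.
# The model id is an assumption -- use the exact name Jan's picker shows.
code_review_payload = {
    "model": "qwen2.5-7b-instruct",
    "messages": [
        {"role": "system",
         "content": "You are a strict code reviewer. Be terse and specific."},
        {"role": "user", "content": "<paste diff here>"},
    ],
    "temperature": 0.3,   # low, for deterministic review comments
    "max_tokens": 4096,   # Jan's default; raise for long-form work
}
```

The system prompt in the GUI becomes the `system` message, and temperature/max tokens carry over by name, so a thread template and a scripted preset stay in sync.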

Step 5, enable the local API

In Settings → Local API Server, toggle it on. Jan exposes an OpenAI-compatible API on localhost:1337 by default.

Jan local API server enabled

You can now point Python or any OpenAI SDK at http://localhost:1337/v1 for scripted use.
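A minimal stdlib sketch of that scripted use, assuming the server toggle above is on and a model is loaded; the model id is a placeholder, so substitute the name your picker shows:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1337/v1"  # Jan's default local API address

def build_request(prompt, model="qwen2.5-7b-instruct"):
    """Build an OpenAI-style chat completion request for Jan's server.
    The model id is an assumption -- use the name from Jan's picker."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt):
    """Send the request and pull out the assistant's reply."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# With Jan open and the server toggled on:
# print(chat("Explain SOLID principles in one paragraph."))
```

Because the endpoint is OpenAI-compatible, the official openai SDK works the same way: point `base_url` at http://localhost:1337/v1 and pass any non-empty string as the API key.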

First run

A typical day combining the GUI and the API:

GUI: ad-hoc questions, reading, brainstorming
API: scripts that need bulk processing

Jan stays in the tray with the server toggled on; Python scripts hit localhost:1337, and the GUI handles chat between API runs.

Combined GUI plus API workflow

For users who want both flexibility and a clean GUI, Jan beats the Ollama CLI alone.
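One wrinkle in that workflow: the API lives inside the GUI process, so an unattended bulk run should check the server is actually up first. A sketch using the standard /models listing from the OpenAI-compatible surface (the endpoint name is assumed from the OpenAI spec, not confirmed against Jan's docs):

```python
import json
import urllib.error
import urllib.request

def jan_is_up(base_url="http://localhost:1337/v1"):
    """Return the list of available model ids if Jan's server answers,
    else None. /models is the standard OpenAI-compatible listing
    (assumption: Jan implements it like the rest of the surface)."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=2) as resp:
            return [m["id"] for m in json.loads(resp.read())["data"]]
    except (urllib.error.URLError, OSError, KeyError, ValueError):
        return None

models = jan_is_up()
print("Jan offline (GUI closed?)" if models is None else models)
```

Failing fast here beats discovering halfway through a batch that the GUI was closed.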

What broke for me

Two specifics. First, the Linux AppImage on Ubuntu 24.04 needed --no-sandbox to launch the first time, similar to other Electron apps under stricter AppArmor profiles. The error was a vague "could not initialize". chmod +x Jan-0.5.0.AppImage && ./Jan-0.5.0.AppImage --no-sandbox got me past it; subsequent launches were clean.

Second, model downloads silently used IPv6 by default on my Jio fibre line, which routed through a slower path. Forcing IPv4 with a system-level setting (or just disabling IPv6 in the network settings) made downloads roughly 3x faster. The hint was the download progress bar moving in tens of KB/s instead of MB/s; once I forced v4 routing, it jumped to 5MB/s.

What it costs

Item                Cost
Jan AI app          Free (Apache 2.0)
Models              Free (per-model licence)
Disk (per model)    2-8GB
Electricity         Standard

Jan is free with no commercial restrictions. Compared to LM Studio (which has a commercial-use clause), Jan is the friendlier licence for shipping inside a product.

When NOT to use this

Skip Jan if you only need a chat GUI and do not care about the API or licensing. LM Studio's UI is more polished for casual use; Jan's strength is the open-source story.

Skip if you need a server that runs without a GUI process. Jan's local API depends on the GUI app being open; for a true headless server, llama-server directly or Ollama as a systemd unit is the cleaner choice.

Indian operator angle

For Indian dev shops shipping AI features inside a product, Jan's Apache 2.0 licence is the cleanest local-LLM substrate I have evaluated. You can fork it, rebrand it, embed it in a client deliverable, and ship without any licensing review. LM Studio's commercial clause forces you into Jan or llama.cpp directly for product work.

For a small studio in Bengaluru I worked with, Jan was the right choice for a privacy-sensitive client where the model and the chat had to stay on the user's machine. The all-local guarantee plus the open licence let the studio commit to the deliverable without legal back-and-forth.
