# 🍋 Lemonade Server Models

This document lists the models we recommend for use with Lemonade Server. Click on any model to see more details about it, such as the Lemonade Recipe used to load the model.
## Naming Convention

The name of each Lemonade model combines the name of the base checkpoint with the backend where the model will run. So, if the base checkpoint is `meta-llama/Llama-3.2-1B-Instruct` and it has been optimized to run on Hybrid, the resulting name is `Llama-3.2-1B-Instruct-Hybrid`.
## Model Storage and Management

Lemonade Server relies on Hugging Face Hub to manage downloading and storing models on your system. By default, Hugging Face Hub downloads models to `C:\Users\YOUR_USERNAME\.cache\huggingface\hub`.
For example, the Lemonade Server `Llama-3.2-1B-Instruct-Hybrid` model will end up at `C:\Users\YOUR_USERNAME\.cache\huggingface\hub\models--amd--Llama-3.2-1B-Instruct-awq-g128-int4-asym-fp16-onnx-hybrid`. If you want to uninstall that model, simply delete that folder.
You can change the directory for Hugging Face Hub by setting the `HF_HOME` or `HF_HUB_CACHE` environment variables.
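For example, here is a minimal sketch of relocating the cache before launching the server; the destination paths are illustrative, not defaults:

```bash
# Point Hugging Face Hub at a custom cache directory (example path).
# Windows PowerShell equivalent:
#   $env:HF_HOME = "D:\hf-cache"
# Linux/macOS:
export HF_HOME=/path/to/hf-cache
```

Models pulled after the variable is set will download to the new location; previously downloaded models remain in the old cache unless you move them.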
## Installing Additional Models

Once you've installed Lemonade Server, you can install any model on this list using the `pull` command in the `lemonade-server` CLI.

Example:

```bash
lemonade-server pull Qwen2.5-0.5B-Instruct-CPU
```
Note: `lemonade-server` is a utility that is added to your PATH when you install Lemonade Server with the GUI installer. If you are using Lemonade Server from a Python environment, use the `lemonade-server-dev pull` command instead.
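Once a model is pulled, you can exercise it through Lemonade Server's OpenAI-compatible chat completions endpoint. The sketch below assumes the server is running on its default port (8000) and exposes the `/api/v1` route; both are assumptions about recent releases, so check the docs for your installed version:

```bash
# Hedged example: query a pulled model via the OpenAI-compatible API.
# Port 8000 and the /api/v1 path are assumptions; adjust for your install.
curl http://localhost:8000/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen2.5-0.5B-Instruct-CPU",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```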
## Supported Models

### Hybrid
#### Llama-3.2-1B-Instruct-Hybrid
```bash
lemonade-server pull Llama-3.2-1B-Instruct-Hybrid
```
#### Llama-3.2-3B-Instruct-Hybrid
```bash
lemonade-server pull Llama-3.2-3B-Instruct-Hybrid
```
#### Phi-3-Mini-Instruct-Hybrid
```bash
lemonade-server pull Phi-3-Mini-Instruct-Hybrid
```
#### Qwen-1.5-7B-Chat-Hybrid
```bash
lemonade-server pull Qwen-1.5-7B-Chat-Hybrid
```
#### DeepSeek-R1-Distill-Llama-8B-Hybrid
```bash
lemonade-server pull DeepSeek-R1-Distill-Llama-8B-Hybrid
```
#### DeepSeek-R1-Distill-Qwen-7B-Hybrid
```bash
lemonade-server pull DeepSeek-R1-Distill-Qwen-7B-Hybrid
```
#### Mistral-7B-v0.3-Instruct-Hybrid
```bash
lemonade-server pull Mistral-7B-v0.3-Instruct-Hybrid
```
#### Llama-3.1-8B-Instruct-Hybrid
```bash
lemonade-server pull Llama-3.1-8B-Instruct-Hybrid
```
### CPU

#### Qwen2.5-0.5B-Instruct-CPU
```bash
lemonade-server pull Qwen2.5-0.5B-Instruct-CPU
```
#### Llama-3.2-1B-Instruct-CPU
```bash
lemonade-server pull Llama-3.2-1B-Instruct-CPU
```
#### Llama-3.2-3B-Instruct-CPU
```bash
lemonade-server pull Llama-3.2-3B-Instruct-CPU
```
#### Phi-3-Mini-Instruct-CPU
```bash
lemonade-server pull Phi-3-Mini-Instruct-CPU
```
#### Qwen-1.5-7B-Chat-CPU
```bash
lemonade-server pull Qwen-1.5-7B-Chat-CPU
```
#### DeepSeek-R1-Distill-Llama-8B-CPU
```bash
lemonade-server pull DeepSeek-R1-Distill-Llama-8B-CPU
```
#### DeepSeek-R1-Distill-Qwen-7B-CPU
```bash
lemonade-server pull DeepSeek-R1-Distill-Qwen-7B-CPU
```