# Ollama
One-stop large language model deployment platform

<img height="128px" src="https://file.lifebus.top/apps/ollama/logo.png" width="128px"/>

![](https://img.shields.io/badge/%E6%96%B0%E7%96%86%E8%90%8C%E6%A3%AE%E8%BD%AF%E4%BB%B6%E5%BC%80%E5%8F%91%E5%B7%A5%E4%BD%9C%E5%AE%A4-%E6%8F%90%E4%BE%9B%E6%8A%80%E6%9C%AF%E6%94%AF%E6%8C%81-blue)

## Model List
The list of models available for Ollama can be found at [ollama.com/library](https://ollama.com/library 'Ollama model library').

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                         |
|--------------------|------------|-------|----------------------------------|
| DeepSeek-R1        | 7B         | 4.7GB | `ollama run deepseek-r1`         |
| DeepSeek-R1        | 671B       | 404GB | `ollama run deepseek-r1:671b`    |
| Llama 3.3          | 70B        | 43GB  | `ollama run llama3.3`            |
| Llama 3.2          | 3B         | 2.0GB | `ollama run llama3.2`            |
| Llama 3.2          | 1B         | 1.3GB | `ollama run llama3.2:1b`         |
| Llama 3.2 Vision   | 11B        | 7.9GB | `ollama run llama3.2-vision`     |
| Llama 3.2 Vision   | 90B        | 55GB  | `ollama run llama3.2-vision:90b` |
| Llama 3.1          | 8B         | 4.7GB | `ollama run llama3.1`            |
| Llama 3.1          | 405B       | 231GB | `ollama run llama3.1:405b`       |
| Phi 4              | 14B        | 9.1GB | `ollama run phi4`                |
| Phi 3 Mini         | 3.8B       | 2.3GB | `ollama run phi3`                |
| Gemma 2            | 2B         | 1.6GB | `ollama run gemma2:2b`           |
| Gemma 2            | 9B         | 5.5GB | `ollama run gemma2`              |
| Gemma 2            | 27B        | 16GB  | `ollama run gemma2:27b`          |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`             |
| Moondream 2        | 1.4B       | 829MB | `ollama run moondream`           |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`         |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`         |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`           |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored`   |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`               |
| Solar              | 10.7B      | 6.1GB | `ollama run solar`               |

> You should have at least 8 GB of RAM to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
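
For example, to try a small model such as the 3B `llama3.2` from the table above, a minimal sketch (the model is downloaded automatically on first use):

```sh
# Start an interactive chat session with a small model
ollama run llama3.2

# Or pass a one-off prompt and print the reply to stdout
ollama run llama3.2 "Why is the sky blue?"
```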
## Basic Commands

### Pull a model

```sh
ollama pull llama3.2
```

This command can also be used to update a local model. Only the diff will be pulled.
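
A specific tag can be pulled the same way, for example the 1B variant listed in the table above:

```sh
# Pull a specific model tag instead of the default
ollama pull llama3.2:1b
```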
### Remove a model

```sh
ollama rm llama3.2
```
### Copy a model

```sh
ollama cp llama3.2 my-model
```
### Show model information

```sh
ollama show llama3.2
```
### List all models on your computer

```sh
ollama list
```
### List which models are currently loaded

```sh
ollama ps
```
### Stop a running model

```sh
ollama stop llama3.2
```
### Start Ollama

```sh
ollama serve
```
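
Once the server is up it exposes a REST API. A minimal sketch of a request, assuming the default listen address `127.0.0.1:11434` and that `llama3.2` has already been pulled:

```sh
# Request a single, non-streamed completion from the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```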
---
![Ms Studio](https://file.lifebus.top/imgs/ms_blank_001.png)