---
license: apache-2.0
base_model:
- rednote-hilab/dots.ocr
base_model_relation: quantized
---
# deepseek-ocr.rs 🚀

Rust implementation of the DeepSeek-OCR inference stack with a fast CLI and an OpenAI-compatible HTTP server. The workspace packages multiple OCR backends, prompt tooling, and a serving layer so you can build document understanding pipelines that run locally on CPU, Apple Metal, or (alpha) NVIDIA CUDA GPUs.

> For documentation in Chinese, see [README_CN.md](README_CN.md).

> Want ready-made binaries? The latest macOS (Metal-enabled) and Windows bundles live in the [build-binaries workflow artifacts](https://github.com/TimmyOVO/deepseek-ocr.rs/actions/workflows/build-binaries.yml). Grab them from the newest green run.

## Choosing a Model 🔬

| Model | Memory footprint* | Best on | When to pick it |
| --- | --- | --- | --- |
| **DeepSeek-OCR** | **≈6.3GB** FP16 weights, **≈13GB** RAM/VRAM with cache & activations (512-token budget) | Apple Silicon + Metal (FP16), high-VRAM NVIDIA GPUs, 32GB+ RAM desktops | Highest accuracy, SAM+CLIP global/local context, MoE DeepSeek-V2 decoder (3B params, ~570M active per token). Use when latency is secondary to quality. |
| **PaddleOCR-VL** | **≈4.7GB** FP16 weights, **≈9GB** RAM/VRAM with cache & activations | 16GB laptops, CPU-only boxes, mid-range GPUs | Dense 0.9B Ernie decoder with SigLIP vision tower. Faster startup, lower memory, great for batch jobs or lightweight deployments. |
| **DotsOCR** | **≈9GB** FP16 weights, but expect **30–50GB** RAM/VRAM for high-res docs due to huge vision token counts | Apple Silicon + Metal BF16, ≥24GB CUDA cards, or 64GB RAM CPU workstations | Unified VLM (DotsVision + Qwen2) that nails layout, reading order, grounding, and multilingual math if you can tolerate the latency and memory bill. |

\*Measured from the default FP16 safetensors. Runtime footprint varies with sequence length.

Guidance:

- **Need maximum fidelity, multi-region reasoning, or already have 16–24GB VRAM?** Use **DeepSeek-OCR**. The hybrid SAM+CLIP tower plus DeepSeek-V2 MoE decoder handles complex layouts best, but expect higher memory use and latency.
- **Deploying to CPU-only nodes, 16GB laptops, or latency-sensitive services?** Choose **PaddleOCR-VL**. Its dense Ernie decoder (18 layers, hidden size 1024) activates fewer parameters per token and keeps memory under 10GB while staying close in quality on most documents.
- **Chasing reading-order accuracy, layout grounding, or multi-page multilingual PDFs on roomy hardware?** Pick **DotsOCR** with BF16 on Metal/CUDA. Prefill runs around 40–50 tok/s on M-series GPUs but can fall to ~12 tok/s on CPU because of the heavy vision tower.

## Why Rust? 💡

The original DeepSeek-OCR ships as a Python + Transformers stack: powerful, but hefty to deploy and awkward to embed. Rewriting the pipeline in Rust gives us:

- Smaller deployable artifacts with zero Python runtime or conda baggage.
- Memory-safe, thread-friendly infrastructure that blends into native Rust backends.
- Unified tooling (CLI + server) running on Candle + Rocket without the Python GIL overhead.
- Drop-in compatibility with OpenAI-style clients while staying tuned for single-turn OCR prompts.

## Technical Stack ⚙️

- **Candle** for tensor compute, with Metal and CUDA backends and FlashAttention support.
- **Rocket** + async streaming for OpenAI-compatible `/v1/responses` and `/v1/chat/completions`.
- **tokenizers** (upstream DeepSeek release) wrapped by `crates/assets` for deterministic caching via Hugging Face and ModelScope mirrors.
- **Pure Rust vision/prompt pipeline** shared by the CLI and server to avoid duplicated logic.

## Advantages over the Python Release 🥷

- Faster cold start on Apple Silicon, lower RSS, and native binary distribution.
- Deterministic dual-source (Hugging Face + ModelScope) asset download and verification built into the workspace.
- Automatic single-turn chat compaction so OCR outputs stay stable even when clients send history.
- Ready-to-use OpenAI compatibility for tools like Open WebUI without adapters.

## Highlights ✨

- **One repo, two entrypoints** – a batteries-included CLI for batch jobs and a Rocket-based server that speaks `/v1/responses` and `/v1/chat/completions`.
- **Works out of the box** – pulls model weights, configs, and the tokenizer from whichever of Hugging Face or ModelScope responds fastest on first run.
- **Optimised for Apple Silicon** – optional Metal backend with FP16 execution for real-time OCR on laptops.
- **CUDA (alpha)** – experimental support via `--features cuda` + `--device cuda --dtype f16`; expect rough edges while we finish kernel coverage.
- **Intel MKL (preview)** – faster BLAS on x86 via `--features mkl` (install Intel oneMKL beforehand).
- **OpenAI client compatibility** – drop-in replacement for popular SDKs; the server automatically collapses chat history to the latest user turn for OCR-friendly prompts.

## Model Matrix 📦

The workspace exposes three base model IDs plus DSQ-quantized variants for DeepSeek-OCR, PaddleOCR-VL, and DotsOCR (a selection example follows the table):

| Model ID | Base Model | Precision | Suggested Use Case |
| --- | --- | --- | --- |
| `deepseek-ocr` | `deepseek-ocr` | FP16 (select via `--dtype`) | Full-fidelity DeepSeek-OCR stack with SAM+CLIP + MoE decoder; use when you prioritise quality on capable Metal/CUDA/CPU hosts. |
| `deepseek-ocr-q4k` | `deepseek-ocr` | `Q4_K` | Tight VRAM, local deployments, and batch jobs that still want DeepSeek's SAM+CLIP pipeline. |
| `deepseek-ocr-q6k` | `deepseek-ocr` | `Q6_K` | Day-to-day balance of quality and size on mid-range GPUs. |
| `deepseek-ocr-q8k` | `deepseek-ocr` | `Q8_0` | Stay close to full-precision quality with manageable memory savings. |
| `paddleocr-vl` | `paddleocr-vl` | FP16 (select via `--dtype`) | Default choice for lighter hardware; 0.9B Ernie + SigLIP tower with strong doc/table OCR and low latency. |
| `paddleocr-vl-q4k` | `paddleocr-vl` | `Q4_K` | Heavily compressed doc/table deployments with aggressive memory budgets. |
| `paddleocr-vl-q6k` | `paddleocr-vl` | `Q6_K` | Common engineering setups; blends accuracy and footprint. |
| `paddleocr-vl-q8k` | `paddleocr-vl` | `Q8_0` | Accuracy-leaning deployments that still want a smaller footprint than FP16. |
| `dots-ocr` | `dots-ocr` | FP16 / BF16 (via `--dtype`) | DotsVision + Qwen2 VLM for high-precision layout, reading order, grounding, and multilingual docs; expect high memory use (30–50GB on large pages). |
| `dots-ocr-q4k` | `dots-ocr` | `Q4_K` | Sidecar DSQ snapshot over the DotsOCR baseline; reduces weight memory/compute while keeping the heavy vision token profile unchanged. |
| `dots-ocr-q6k` | `dots-ocr` | `Q6_K` | Recommended balance of size and quality when you already accept DotsOCR's memory footprint but want cheaper weights. |
| `dots-ocr-q8k` | `dots-ocr` | `Q8_0` | Accuracy-leaning DotsOCR deployment that stays close to FP16/BF16 quality with modest memory savings. |

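The quantized variants are addressed by the same model-ID mechanism as the base models. A minimal sketch, assuming the `--model` flag described under "Switching Models" also accepts the quantized IDs listed above:

```bash
# Pick the Q6_K DeepSeek-OCR snapshot by its model ID; assets for the selected
# ID are fetched into the cache on first use.
cargo run -p deepseek-ocr-cli --release -- \
  --model deepseek-ocr-q6k \
  --prompt "<image>\nConvert this receipt to markdown." \
  --image baselines/sample/images/test.png
```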

## Quick Start 🏁

### Prerequisites

- Rust 1.85+ (edition 2024 support)
- Git
- Optional: Apple Silicon running macOS 13+ for Metal acceleration
- Optional: CUDA 12.2+ toolkit + driver for experimental NVIDIA GPU acceleration on Linux/Windows
- Optional: Intel oneAPI MKL for preview x86 acceleration (see below)
- (Recommended) Hugging Face account with `HF_TOKEN` when pulling from the `deepseek-ai/DeepSeek-OCR` repo (ModelScope is used automatically when it is the faster or more reachable source).

### Clone the Workspace

```bash
git clone https://github.com/TimmyOVO/deepseek-ocr.rs.git
cd deepseek-ocr.rs
cargo fetch
```

### Model Assets

The first invocation of the CLI or server downloads the config, tokenizer, and `model-00001-of-000001.safetensors` (~6.3GB) into `DeepSeek-OCR/`. To prefetch manually:

```bash
cargo run -p deepseek-ocr-cli --release -- --help  # the dev profile is extremely slow; always prefer --release
```

> Always include `--release` when running from source; debug builds are extremely slow with this model.

Set `HF_HOME`/`HF_TOKEN` if you store Hugging Face caches elsewhere (ModelScope downloads land alongside the same asset tree). The full model package is ~6.3GB on disk and typically requires ~13GB of RAM headroom during inference (model + activations).

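For example, to point the cache at a custom location and authenticate before the first download (standard Hugging Face environment variables; the paths and token below are placeholders):

```bash
# Point Hugging Face downloads at a shared cache and authenticate once.
export HF_HOME=/data/hf-cache
export HF_TOKEN=hf_xxxxxxxxxxxxxxxx   # placeholder token

# Trigger the asset prefetch described above.
cargo run -p deepseek-ocr-cli --release -- --help
```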

## Configuration & Overrides 🗂️

The CLI and server share the same configuration. On first launch we create a `config.toml` populated with defaults; later runs reuse it so both entrypoints stay in sync.

| Platform | Config file (default) | Model cache root |
| --- | --- | --- |
| Linux | `~/.config/deepseek-ocr/config.toml` | `~/.cache/deepseek-ocr/models/<id>/…` |
| macOS | `~/Library/Application Support/deepseek-ocr/config.toml` | `~/Library/Caches/deepseek-ocr/models/<id>/…` |
| Windows | `%APPDATA%\deepseek-ocr\config.toml` | `%LOCALAPPDATA%\deepseek-ocr\models\<id>\…` |

- Override the location with `--config /path/to/config.toml` (available on both the CLI and server). Missing files are created automatically.
- Each `[models.entries."<id>"]` record can point to custom `config`, `tokenizer`, or `weights` files; when omitted we fall back to the cache directory above and download or update assets as required (see the example after the defaults below).
- Runtime values resolve in this order: command-line flags → values stored in `config.toml` → built-in defaults. The HTTP API adds a final layer where request payload fields (for example `max_tokens`) override everything else for that call.

The generated file starts with the defaults below; adjust them to persistently change behaviour:

```toml
[models]
active = "deepseek-ocr"

[models.entries.deepseek-ocr]

[inference]
device = "cpu"
template = "plain"
base_size = 1024
image_size = 640
crop_mode = true
max_new_tokens = 512
use_cache = true

[server]
host = "0.0.0.0"
port = 8000
```

- `[models]` picks the active model and lets you add more entries (each entry can point to its own config/tokenizer/weights).
- `[inference]` controls notebook-friendly defaults shared by the CLI and server (device, template, vision sizing, decoding budget, cache usage).
- `[server]` sets the network binding and the model identifier reported by `/v1/models`.

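As an illustration of a custom entry, the sketch below appends a second model record that points at local files (Linux config path shown; the entry name and file paths are placeholders, and the exact on-disk layout of a fine-tune is up to you). Command-line flags still win over anything stored here:

```bash
# Append a hypothetical entry backed by local files to the shared config.
# The config/tokenizer/weights keys mirror the per-entry overrides described above.
cat >> ~/.config/deepseek-ocr/config.toml <<'EOF'

[models.entries."deepseek-ocr-local"]
config = "/models/deepseek-ocr-local/config.json"
tokenizer = "/models/deepseek-ocr-local/tokenizer.json"
weights = "/models/deepseek-ocr-local/model.safetensors"
EOF
```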

See `crates/cli/README.md` and `crates/server/README.md` for concise override tables.

## Benchmark Snapshot 📊

Single-request Rust CLI (Accelerate backend on macOS) compared with the reference Python pipeline on the same prompt and image. The `ref` columns are the Rust CLI timings; `python/ref` above 1.0 means the Rust path is faster for that stage:

| Stage | ref total (ms) | ref avg (ms) | python total (ms) | python/ref |
|---------------------------------------------------|----------------|--------------|-------------------|------------|
| Decode – Overall (`decode.generate`) | 30077.840 | 30077.840 | 56554.873 | 1.88x |
| Decode – Token Loop (`decode.iterative`) | 26930.216 | 26930.216 | 39227.974 | 1.46x |
| Decode – Prompt Prefill (`decode.prefill`) | 3147.337 | 3147.337 | 5759.684 | 1.83x |
| Prompt – Build Tokens (`prompt.build_tokens`) | 0.466 | 0.466 | 45.434 | 97.42x |
| Prompt – Render Template (`prompt.render`) | 0.005 | 0.005 | 0.019 | 3.52x |
| Vision – Embed Images (`vision.compute_embeddings`)| 6391.435 | 6391.435 | 3953.459 | 0.62x |
| Vision – Prepare Inputs (`vision.prepare_inputs`) | 62.524 | 62.524 | 45.438 | 0.73x |

## Command-Line Interface 🖥️

Build and run directly from the workspace:

```bash
cargo run -p deepseek-ocr-cli --release -- \
  --prompt "<image>\n<|grounding|>Convert this receipt to markdown." \
  --image baselines/sample/images/test.png \
  --device cpu --max-new-tokens 512
```

> Tip: `--release` is required for reasonable throughput; debug builds can be 10x slower.

> macOS tip: append `--features metal` to the `cargo run`/`cargo build` commands to compile with the Accelerate + Metal backends.
>
> CUDA tip (Linux/Windows): append `--features cuda` and run with `--device cuda --dtype f16` to target NVIDIA GPUs; the feature is still alpha, so be ready for quirks.
>
> Intel MKL preview: install Intel oneMKL, then build with `--features mkl` for faster CPU matmuls on x86.

Install the CLI as a binary:

```bash
cargo install --path crates/cli
deepseek-ocr-cli --help
```

Key flags:

- `--prompt` / `--prompt-file`: text with `<image>` slots
- `--image`: path(s) matching the `<image>` placeholders
- `--device` and `--dtype`: choose `metal` + `f16` on Apple Silicon or `cuda` + `f16` on NVIDIA GPUs
- `--max-new-tokens`: decoding budget
- Sampling controls: `--do-sample`, `--temperature`, `--top-p`, `--top-k`, `--repetition-penalty`, `--no-repeat-ngram-size`, `--seed`
  - By default decoding stays deterministic (`do_sample=false`, `temperature=0.0`, `no_repeat_ngram_size=20`)
  - To use stochastic sampling set `--do-sample true --temperature 0.8` and adjust the other knobs as needed; see the example after this list

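For instance, a non-deterministic run that exercises the sampling flags listed above might look like this (the flag values are illustrative, not tuned recommendations):

```bash
# Stochastic decoding: enable sampling, soften the distribution, and pin a seed
# so the run is repeatable.
deepseek-ocr-cli \
  --prompt "<image>\nConvert this receipt to markdown." \
  --image baselines/sample/images/test.png \
  --do-sample true --temperature 0.8 --top-p 0.9 --seed 42
```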

### Switching Models

The autogenerated `config.toml` now lists three entries:

- `deepseek-ocr` (default) – the original DeepSeek vision-language stack.
- `paddleocr-vl` – the PaddleOCR-VL 0.9B SigLIP + Ernie release.
- `dots-ocr` – the Candle port of dots.ocr with DotsVision + Qwen2 (use BF16 on Metal/CUDA if possible; see the release matrix for memory notes).

Pick which one to load via `--model`:

```bash
deepseek-ocr-cli --model paddleocr-vl --prompt "<image> Summarise"
```

The CLI (and server) will download the matching config/tokenizer/weights from the appropriate repository (`deepseek-ai/DeepSeek-OCR`, `PaddlePaddle/PaddleOCR-VL`, or `dots-ocr`) into your cache on first use. You can still override paths with `--model-config`, `--tokenizer`, or `--weights` if you maintain local fine-tunes.

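A sketch of such a local override, assuming a fine-tune exported in the usual config/tokenizer/safetensors layout (the file paths are placeholders):

```bash
# Run against locally maintained files instead of the downloaded assets.
deepseek-ocr-cli \
  --model deepseek-ocr \
  --model-config ./finetune/config.json \
  --tokenizer ./finetune/tokenizer.json \
  --weights ./finetune/model.safetensors \
  --prompt "<image>\nConvert this receipt to markdown." \
  --image baselines/sample/images/test.png
```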

## HTTP Server ☁️

Launch an OpenAI-compatible endpoint:

```bash
cargo run -p deepseek-ocr-server --release -- \
  --host 0.0.0.0 --port 8000 \
  --device cpu --max-new-tokens 512
```

> Keep `--release` on the server as well; the debug profile is far too slow for inference workloads.
>
> macOS tip: add `--features metal` to the `cargo run -p deepseek-ocr-server` command when you want the server binary to link against Accelerate + Metal (and pair it with `--device metal` at runtime).
>
> CUDA tip: add `--features cuda` and start the server with `--device cuda --dtype f16` to offload inference to NVIDIA GPUs (alpha-quality support).
>
> Intel MKL preview: install Intel oneMKL before building with `--features mkl` to accelerate CPU workloads on x86.

Notes:

- Use `data:` URLs or remote `http(s)` links; local paths are rejected (see the curl sketch after these notes).
- The server collapses multi-turn chat inputs to the latest user message to keep prompts OCR-friendly.
- Works out of the box with tools such as [Open WebUI](https://github.com/open-webui/open-webui) or any OpenAI-compatible client: just point the base URL to your server (`http://localhost:8000/v1`) and select either the `deepseek-ocr` or `paddleocr-vl` model ID exposed in `/v1/models`.
- Adjust the request body limit with the Rocket config if you routinely send large images.

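A minimal curl sketch against a local server. Listing models uses the `/v1/models` route mentioned above; the chat request assumes the OpenAI vision-style `image_url` content format (which is what OpenAI-compatible clients such as Open WebUI send), so adapt the payload if your client differs:

```bash
# List the model IDs the server advertises.
curl http://localhost:8000/v1/models

# OCR a local image by inlining it as a data: URL (local file paths are rejected).
IMG_B64=$(base64 < baselines/sample/images/test.png | tr -d '\n')

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "model": "deepseek-ocr",
  "max_tokens": 512,
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "Convert this receipt to markdown." },
        { "type": "image_url", "image_url": { "url": "data:image/png;base64,${IMG_B64}" } }
      ]
    }
  ]
}
EOF
```

If large images get rejected, see the request body limit notes above and in Troubleshooting.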

![Open WebUI connected to deepseek-ocr.rs](./baselines/sample_1.png)

## GPU Acceleration ⚡

- **Metal (macOS 13+ Apple Silicon)** – pass `--device metal --dtype f16` and build binaries with `--features metal` so Candle links against Accelerate + Metal.
- **CUDA (alpha, NVIDIA GPUs)** – install the CUDA 12.2+ toolkit, build with `--features cuda`, and launch the CLI/server with `--device cuda --dtype f16`; still experimental.
- **Intel MKL (preview)** – install Intel oneMKL and build with `--features mkl` to speed up CPU workloads on x86.
- For any backend, prefer release builds (e.g. `cargo build --release -p deepseek-ocr-cli --features metal|cuda`) to maximise throughput; see the commands after this list.
- Combine GPU runs with `--max-new-tokens` and the crop tuning flags to balance latency against quality.

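For example, on an Apple Silicon machine the build-and-run sequence looks like this (the CUDA variant is the same with `--features cuda` and `--device cuda`; `target/release/` is simply where Cargo places release binaries):

```bash
# Build the CLI with the Metal backend enabled.
cargo build --release -p deepseek-ocr-cli --features metal

# Run the release binary on the GPU with FP16 weights.
./target/release/deepseek-ocr-cli \
  --device metal --dtype f16 \
  --prompt "<image>\nConvert this receipt to markdown." \
  --image baselines/sample/images/test.png
```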
255
+ ## Repository Layout πŸ—‚οΈ
256
+
257
+ - `crates/core` – shared inference pipeline, model loaders, conversation templates.
258
+ - `crates/cli` – command-line frontend (`deepseek-ocr-cli`).
259
+ - `crates/server` – Rocket server exposing OpenAI-compatible endpoints.
260
+ - `crates/assets` – asset management (configuration, tokenizer, Hugging Face + ModelScope download helpers).
261
+ - `baselines/` – reference inputs and outputs for regression testing.
262
+
263
+ Detailed CLI usage lives in [`crates/cli/README.md`](crates/cli/README.md). The server’s OpenAI-compatible interface is covered in [`crates/server/README.md`](crates/server/README.md).
264
+
265
+ ## Troubleshooting πŸ› οΈ
266
+
267
+ - **Where do assets come from?** – downloads automatically pick between Hugging Face and ModelScope based on latency; the CLI prints the chosen source for each file.
268
+ - **Slow first response** – model load and GPU warm-up (Metal/CUDA alpha) happen on the initial request; later runs are faster.
269
+ - **Large image rejection** – increase Rocket JSON limits in `crates/server/src/main.rs` or downscale the input.
270
+
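If you prefer not to edit the source, Rocket also reads limits from its standard configuration sources. This is a hedged sketch: it assumes the server is built on Rocket's default configuration provider, which honours `ROCKET_*` environment variables; if the limit is set programmatically in `main.rs`, change it there instead.

```bash
# Raise the JSON body limit (base64-encoded images inflate payload size) before launch.
export ROCKET_LIMITS='{json="20MiB"}'

cargo run -p deepseek-ocr-server --release -- \
  --host 0.0.0.0 --port 8000
```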

## Roadmap 🗺️

- ✅ Apple Metal backend with FP16 support and CLI/server parity on macOS.
- ✅ NVIDIA CUDA backend (alpha) – build with `--features cuda`, run with `--device cuda --dtype f16` for Linux/Windows GPUs; polishing in progress.
- 🔄 **Parity polish** – finish projector normalisation + crop tiling alignment; extend the intermediate-tensor diff suite beyond the current sample baseline.
- 🔄 **Grounding & streaming** – port the Python post-processing helpers (box extraction, markdown polish) and refine SSE streaming ergonomics.
- 🔄 **Cross-platform acceleration** – continue tuning CUDA kernels, add automatic device detection across CPU/Metal/CUDA, and publish opt-in GPU benchmarks.
- 🔄 **Packaging & Ops** – ship binary releases with deterministic asset checksums, richer logging/metrics, and Helm/Docker references for server deploys.
- 🔜 **Structured outputs** – optional JSON schema tools for downstream automation once parity gaps close.

## License 📄

This repository inherits the licenses of its dependencies and the upstream DeepSeek-OCR model. Refer to `DeepSeek-OCR/LICENSE` for model terms and apply the same restrictions to downstream use.