Update README.md
README.md
CHANGED
@@ -55,7 +55,7 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -p "
+llama-cli --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -p "Build a Blender model of Starship"
 ```
 
 ### Server:
@@ -77,7 +77,7 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -p "
+./llama-cli --hf-repo hellork/BlenderLLM-IQ3_XXS-GGUF --hf-file blenderllm-iq3_xxs-imat.gguf -p "Write a Blender script to construct a Tie Fighter"
 ```
 or
 ```