Apertus tool parser
In the model description you say that Apertus supports tool calling and that it is supported by vLLM.
To enable tool calling in vLLM, you need to pass the following three parameters:
--enable-auto-tool-choice
--tool-call-parser
--chat-template
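Putting the three flags together, a launch command would look something like the sketch below. The model path and parser name here are placeholders I made up for illustration, since (as discussed in this thread) there is no official Apertus parser yet:

```shell
# Hypothetical invocation -- model name, parser choice, and template path
# are assumptions, not an officially supported configuration.
vllm serve swiss-ai/Apertus-8B-Instruct \
  --enable-auto-tool-choice \
  --tool-call-parser llama3_json \
  --chat-template ./chat_template.jinja
```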
Here is the Llama example from the vLLM docs:
https://docs.vllm.ai/en/latest/features/tool_calling.html
The chat-template seems to be here: https://github.com/swiss-ai/apertus-format/blob/main/src/templates/chat_template.jinja (please correct me if I'm wrong).
But what about the tool-call-parser?
Any help here?
Thanks!
Thanks for the info! We were able to modify the chat_template and create our own Apertus tool parser, and it's working well. We are able to use it with Semantic Kernel (Python), and Apertus is calling the tools correctly. Great job! :)
@frsodano go ahead and share yours if you'd like. We will release and integrate a parser into the inference engines once it's fully supported; right now the model responds to the format in the prompt rather than the format it was trained on (we still need to do more training for tooling). ;)
We modified the chat_template and the llama3_json parser from vLLM to make it work with Apertus. We are fixing a bug we found with tools that take no parameters, but the rest is working fine. We are now testing it properly and then we will share it. Sharing is caring. :)
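For anyone curious what the no-parameters bug looks like, here is a minimal sketch of the parsing step involved. This is not the actual modified parser, just an illustration of the llama3_json-style approach, where the model emits a JSON object with a tool name and optional arguments; the function name and JSON keys are assumptions:

```python
import json


def parse_tool_call(raw: str):
    """Parse one JSON tool call emitted by the model (illustrative only).

    Mirrors the llama3_json idea: the model outputs a JSON object with
    the tool name and, optionally, its arguments.
    """
    call = json.loads(raw)
    name = call["name"]
    # A tool with no parameters may omit the key entirely; defaulting to
    # an empty dict keeps downstream code from crashing on None/KeyError,
    # which is the kind of edge case described above.
    arguments = call.get("parameters") or {}
    return name, arguments


# A call with arguments and one without:
parse_tool_call('{"name": "get_weather", "parameters": {"city": "Bern"}}')
parse_tool_call('{"name": "get_time"}')
```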