Apertus tool parser

#18
by frsodano - opened

In the model description you say that Apertus supports tools and that it works with vLLM.

To enable tool calling in vLLM, you need to provide the following three parameters:

--enable-auto-tool-choice
--tool-call-parser
--chat-template

Here is the vLLM documentation, with an example for Llama:

https://docs.vllm.ai/en/latest/features/tool_calling.html

The chat-template seems to be here: https://github.com/swiss-ai/apertus-format/blob/main/src/templates/chat_template.jinja (please correct me if I'm wrong).
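As a quick sanity check (an untested sketch on my side; the model ID and the tool are placeholders), rendering that template with transformers should show whether tool definitions actually end up in the prompt:

```python
# Untested sketch: render the Apertus chat template with a tool definition
# to check that tools show up in the prompt. Model ID and tool are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("swiss-ai/Apertus-8B-Instruct-2509")

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What's the weather in Bern?"}],
    tools=[weather_tool],
    chat_template=open("chat_template.jinja").read(),  # the template linked above
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # the tool definition should appear somewhere in the rendered prompt
```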

But what about the tool-call-parser?

Any help here?

Thanks!

Swiss AI Initiative org

@frsodano hey, tool use isn't fully supported yet. The model has been trained on some initial tool data, so it has the capability, but we haven't integrated it into inference engines yet as we're still training, e.g. for tooling.

Thanks for the info! We were able to modify the chat_template and create our own Apertus tool parser, and it's working well. We are using it with Semantic Kernel (Python), and Apertus is calling the tools correctly! Great job! :)
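For anyone curious, the client side is just a standard OpenAI-style tool call against the vLLM endpoint. We go through Semantic Kernel, but a rough equivalent with the plain openai client looks like this (host, model name, and tool are placeholders, not our exact code):

```python
# Rough equivalent of what we do via Semantic Kernel, using the plain openai
# client against vLLM's OpenAI-compatible server. Host, model, and tool are
# placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="swiss-ai/Apertus-8B-Instruct-2509",  # whatever --model you served
    messages=[{"role": "user", "content": "What's the weather in Bern?"}],
    tools=tools,
    tool_choice="auto",
)

# With --enable-auto-tool-choice and a working parser, the tool call comes
# back structured instead of as raw text.
print(response.choices[0].message.tool_calls)
```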

Hi all! We'd also be highly interested in getting tools working. @frsodano, any chance you could share details about your solution? 😇 Br, Tom

Swiss AI Initiative org

@frsodano go ahead and share yours if you'd like. We will release and integrate a parser into the inference engines once it's fully supported; right now the model responds to the format given in the prompt rather than the format it was trained on (as we still need to do more training for tooling) ;)

We modified the chat_template and the llama3_json parser from vLLM to make it work with Apertus. We are still fixing a bug we found with tools that have no parameters, but the rest is working fine. We are now testing it properly and will share it afterwards. Sharing is caring. :)
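Until we publish it, here is the rough idea of the extraction step with the vLLM plumbing stripped away. This is a simplified sketch, not our actual parser, and the exact JSON format the template emits is an assumption; the real version plugs into vLLM via --tool-call-parser and also handles streaming:

```python
# Simplified sketch in the spirit of vLLM's llama3_json parser: the model is
# expected to emit a JSON object (or list of objects) of the form
# {"name": ..., "parameters": {...}} for tool calls, and plain text otherwise.
# Not the actual parser — format and defaults are assumptions.
import json
from typing import Optional


def extract_tool_calls(model_output: str) -> tuple[list[dict], Optional[str]]:
    """Return (tool_calls, content). If no tool call is found, the raw text
    is returned as content and the tool-call list stays empty."""
    text = model_output.strip()
    if not text.startswith(("{", "[")):
        return [], model_output

    try:
        parsed = json.loads(text)
    except json.JSONDecodeError:
        return [], model_output

    candidates = parsed if isinstance(parsed, list) else [parsed]
    tool_calls = []
    for call in candidates:
        if isinstance(call, dict) and "name" in call:
            tool_calls.append({
                "name": call["name"],
                # Tools without parameters were the bug we hit: default to {}.
                "arguments": json.dumps(call.get("parameters") or {}),
            })

    if tool_calls:
        return tool_calls, None
    return [], model_output


# Example:
# extract_tool_calls('{"name": "get_weather", "parameters": {"city": "Bern"}}')
# -> ([{'name': 'get_weather', 'arguments': '{"city": "Bern"}'}], None)
```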
