---
title: README
emoji: π
colorFrom: yellow
colorTo: red
sdk: static
pinned: false
---
<style>
img {
  width: 15%;
  height: 15%;
}
</style>

# Digital Clockwork
Hello! We are a modern-day digital company that operates like clockwork. Right now our dedicated two-person team is scrambling to build up our personal toolset (yes, we'll be sharing every bit of it `<3`) so we can bring on the next level of tools.
For now we have only the most meager of offerings from our junior, but we're happy to say that granular GGUF quantization within Colab is now easy-peasy. (No, we did not magic together a super-model-glue; it still needs to be a model llama.cpp supports.)
[llama.cpp GGUF Quantization in Colab](https://colab.research.google.com/drive/1d3swlyfnubhq8tCfoNGqaCFa5lpzBJvZ?usp=sharing)
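
If you're curious what the notebook is doing under the hood, the rough shape of a llama.cpp GGUF quantization pass looks something like the sketch below. This is a hedged outline, not the notebook's exact code: the model directory, output filenames, script path, and the Q4_K_M target are all assumptions about a typical llama.cpp checkout with its binaries already built.

```python
# A minimal sketch of a llama.cpp GGUF quantization pass.
# Paths and the Q4_K_M quantization target are illustrative assumptions,
# not the notebook's actual defaults.
import subprocess

MODEL_DIR = "my-hf-model"          # hypothetical local Hugging Face model directory
F16_GGUF = "model-f16.gguf"        # intermediate full-precision GGUF
QUANT_GGUF = "model-Q4_K_M.gguf"   # final quantized GGUF

# 1. Convert the Hugging Face checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize it with llama.cpp's quantizer binary (built beforehand).
subprocess.run(
    ["llama.cpp/build/bin/llama-quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```

The notebook wraps these steps (plus the llama.cpp build and dependency install) so you only pick the model and the quantization type.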