
Jordan Legg PRO

takarajordan

AI & ML interests

Chief AI Officer @takara.ai. Diffusion, Inference optimisation and all things MultiModal.

Recent Activity

updated a dataset 2 days ago
takara-ai/HashGrad-10M
published a dataset 2 days ago
takara-ai/HashGrad-10M
new activity 5 days ago
takarajordan/CineDiffusion:16x9

Organizations

Social Post Explorers · Cohere Labs Community · takara.ai · Hugging Face Discord Community · Intelligent Estate · open/ acc · Donut Earthers 🍩

takarajordan's activity

New activity in takarajordan/CineDiffusion 5 days ago

16x9 (#14, opened 3 months ago by Hamed744)
replied to their post 18 days ago
replied to their post 19 days ago

@ThomasTheMaker it's just the raw attention and transformer architecture in Go, designed for serverless, so performance will definitely be lower than ggml and llama.cpp since it's not GPU-accelerated. But if you're into CPU-only edge AI, this is the first, only, and best way to compute attention.

Quantization can definitely be supported as it's just a math model!
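Since quantization is "just a math model", here is a minimal sketch of what symmetric int8 weight quantization could look like in plain Go. This is purely illustrative and not part of go-attention; the function names `quantize` and `dequantize` are hypothetical.

```go
package main

import (
	"fmt"
	"math"
)

// quantize maps float32 weights to int8 with a single symmetric scale,
// so inference can run with integer arithmetic on CPU-only devices.
func quantize(w []float32) (q []int8, scale float32) {
	var maxAbs float32
	for _, v := range w {
		if a := float32(math.Abs(float64(v))); a > maxAbs {
			maxAbs = a
		}
	}
	if maxAbs == 0 {
		return make([]int8, len(w)), 1
	}
	scale = maxAbs / 127
	q = make([]int8, len(w))
	for i, v := range w {
		q[i] = int8(math.Round(float64(v / scale)))
	}
	return q, scale
}

// dequantize recovers approximate float32 values from the int8 codes.
func dequantize(q []int8, scale float32) []float32 {
	out := make([]float32, len(q))
	for i, v := range q {
		out[i] = float32(v) * scale
	}
	return out
}

func main() {
	w := []float32{0.5, -1.0, 0.25, 0.0}
	q, s := quantize(w)
	fmt.Println(q, s, dequantize(q, s))
}
```

The round trip loses at most half a quantization step per weight, which is the usual accuracy/size trade-off for int8 schemes.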

posted an update 19 days ago
🎌 Two months in, https://github.com/takara-ai/go-attention has passed 429 stars on GitHub.

We built this library at takara.ai to bring attention mechanisms and transformer layers to Go, in a form that's lightweight, clean, and dependency-free.

We're proud to say that every part of this project reflects what we set out to do.

- Pure Go: no external dependencies, built entirely on the Go standard library
- Core support for DotProductAttention and MultiHeadAttention
- Full transformer layers with LayerNorm, feed-forward networks, and residual connections
- Designed for edge, embedded, and real-time environments where simplicity and performance matter
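To illustrate the kind of computation such a library performs, here is a self-contained scaled dot-product attention in plain Go using only the standard library. This is a sketch of the general technique, not go-attention's actual API.

```go
package main

import (
	"fmt"
	"math"
)

// softmax normalizes a score vector into attention weights,
// subtracting the max for numerical stability.
func softmax(x []float64) []float64 {
	max := math.Inf(-1)
	for _, v := range x {
		if v > max {
			max = v
		}
	}
	out := make([]float64, len(x))
	var sum float64
	for i, v := range x {
		out[i] = math.Exp(v - max)
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}

// scaledDotProductAttention computes softmax(QK^T / sqrt(d)) V for one head.
// Q, K, V are [seqLen][d] matrices.
func scaledDotProductAttention(Q, K, V [][]float64) [][]float64 {
	d := float64(len(Q[0]))
	out := make([][]float64, len(Q))
	for i, q := range Q {
		scores := make([]float64, len(K))
		for j, k := range K {
			var dot float64
			for t := range q {
				dot += q[t] * k[t]
			}
			scores[j] = dot / math.Sqrt(d)
		}
		weights := softmax(scores)
		row := make([]float64, len(V[0]))
		for j, w := range weights {
			for t, v := range V[j] {
				row[t] += w * v
			}
		}
		out[i] = row
	}
	return out
}

func main() {
	Q := [][]float64{{1, 0}, {0, 1}}
	K := [][]float64{{1, 0}, {0, 1}}
	V := [][]float64{{1, 2}, {3, 4}}
	fmt.Println(scaledDotProductAttention(Q, K, V))
}
```

Everything here is plain loops over slices, which is why this style of code runs anywhere Go compiles, with no GPU or C dependency.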

Thank you to everyone who has supported this so far. The stars, forks, and feedback mean a lot.
posted an update 25 days ago
AI research over coffee ☕️
No abstracts, just bullet points.
Start your day here: https://tldr.takara.ai
replied to samchain's post about 1 month ago

This is a pretty big update for sure. The models have improved significantly, which is great for everyone involved, especially the end user. Those datasets look very promising as well!

replied to wassemgtk's post about 1 month ago

Sounds interesting, I'll check it out!

replied to etemiz's post about 1 month ago

This is a really interesting post. I've been looking at the DeepSeek models for sure. This shows a pretty nice improvement, would love to see some example changes!

replied to chansung's post about 1 month ago
posted an update about 1 month ago
Takara takes 3rd place in the {tech:munich} AI hackathon with Fudeno!

A little over 2 weeks ago, @aldigobbler and I set out to create the largest MultiModal SVG dataset ever created. We succeeded, and when I was in Munich, Germany, I took it one step further and made an entire app with it!

We fine-tuned Mistral Small, made a Next.js application, and blew some minds, taking 3rd place out of over 100 hackers. So cool!

If you want to see the dataset, please see below.

takara-ai/fudeno-instruct-4M