Dataset Viewer

url (string, length 24–101 chars) | description (string, length 169–383k chars)
---|---
https://github.com/Idov31/NovaHypervisor
|
NovaHypervisor
NovaHypervisor is a defensive x64 Intel host based hypervisor. The goal of this project is to protect against kernel based attacks (either via Bring Your Own Vulnerable Driver (BYOVD) or other means) by safeguarding defense products (AntiVirus / Endpoint Protection) and kernel memory structures and preventing unauthorized access to kernel memory.
Languages: C++ (91.1%), Assembly (5.1%), C (3.8%)
Images
NovaClient
NovaHypervisor
...
.gitattributes
.gitignore
LICENSE.txt
NovaHypervisor.sln
README.md
> README.md
# NovaHypervisor
<p align="center">
<img alt="Logo" src="./Images/logo_transparent.png" width="400" height="400">
</p>
  
## Description
NovaHypervisor is a defensive x64 Intel host based hypervisor. The goal of this project is to protect against kernel based attacks (either via Bring Your Own Vulnerable Driver (BYOVD) or other means) by safeguarding defense products (AntiVirus / Endpoint Protection) and kernel memory structures and preventing unauthorized access to kernel memory.
NovaHypervisor is written in C++ and Assembly, and is designed to be compatible with Hyper-V and run on Windows 10 and later versions. Please see the [setup](#setup) section for more information on how to use it.
> [!WARNING]
> This project is in a very early stage of development and is not yet ready for production use. It is intended for educational purposes and to demonstrate the concepts of a defensive hypervisor.
> The project has been tested on the latest Windows 10, and while it should work on Windows 11, it has not been tested on that version yet.
## Usage
To use NovaHypervisor, you will need to create a kernel service and start it:
```cmd
sc create NovaHypervisor type= kernel binPath= "C:\Path\To\NovaHypervisor.sys"
sc start NovaHypervisor
```
Then, you can add and remove the addresses that you want to protect using the [NovaClient](./NovaClient/) application:
```cmd
REM Add an address to protect
NovaClient.exe protect 0x12345678 <r|w|x> <execution hook>
REM Remove an address from protection
NovaClient.exe unprotect 0x12345678
```
- protect: Protect a memory address from being accessed. You can specify the type of protection:
- `r`: Read protection
- `w`: Write protection
- `x`: Execute protection
The protection that you specify is the protection that the address will **have**. For example, if you want to remove execute privileges, specify "rw".
- unprotect: Remove protection from a memory address.
> [!NOTE]
> Execution hooks via inline hook + EPT hooks are not supported and will not be supported in this project, to prevent abuse.
## Setup
### Compiling the Project
Compiling the project requires:
- Visual Studio 2022 or later.
- Windows Driver Kit (WDK) installed.
### Target setup
To run the hypervisor, you will need to have a Windows 10 or later version installed on your machine. You will also need to have:
- Intel VT-x enabled.
- Virtualized IOMMU.
## Logging and Debugging
### Logging
NovaHypervisor uses [WPP](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/wpp-software-tracing) logging, as it provides an easy-to-use interface that also works in VMX root. To see the logs, make sure to create a trace session once:
```cmd
logman create trace "NovaHypervisorLogs" -p {e74c1035-77d4-4c5b-9088-77056fae3aa3} 0xffffffff 0xff -o C:\Path\To\NovaHypervisor.etl
```
Later on, whenever you want to start or end the logging session you can use:
```cmd
logman start "NovaHypervisorLogs"
logman stop "NovaHypervisorLogs"
```
To view the logs you can use tools such as [TraceView](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/traceview).
### Debugging
To test and debug it in your testing environment, run the following commands from an elevated cmd prompt and then restart your machine:
```cmd
bcdedit /set testsigning on
bcdedit /debug on
bcdedit /dbgsettings net hostip:<HOSTIP> port:55000 key:1.2.3.4
```
Where `<HOSTIP>` is the IP address of your host machine.
## Resources
- [Hypervisor From Scratch](https://rayanfam.com/topics/hypervisor-from-scratch-part-1/)
- [HyperDbg](https://github.com/HyperDbg/HyperDbg)
## Personal Thanks & Contributors
- [Sinaei](https://x.com/Intel80x86): For his help with answering questions I had and for his amazing work on HyperDbg and Hypervisor From Scratch.
- [memN0ps](https://github.com/memN0ps/): For his help with answering questions I had and pointing me to the right resources.
|
https://github.com/salykova/sgemm.cu
|
sgemm.cu
High-Performance SGEMM on CUDA devices
Languages: Cuda (79.8%), C++ (9.3%), C (5.0%), Shell (2.8%), CMake (1.7%), Python (1.4%)
assets
common
scripts
src
...
.clang-format
.clangd
.gitignore
CMakeLists.txt
LICENSE
> README.md
# High-Performance SGEMM on NVIDIA GPUs
> **Important note:** while the implementation is expected to be highly performant on all Ada/Ampere/Volta/Turing devices, it was specifically fine-tuned for and tested on the NVIDIA RTX 3090 (GA102 chip, also found in the RTX 3080, A10, A40, A6000).
## Benchmark
> Avoid using WSL for performance measurements. To ensure accurate and reliable results, please use a native Linux environment.
To benchmark the code, specify the compute capability of your CUDA device and run `benchmark.sh`. For example, on an RTX 3090:
```bash
bash benchmark.sh 86
```
The benchmark settings such as minimum/maximum matrix sizes, step size, number of warm-up iterations etc. can be adjusted in the `benchmark.sh` file.
To visualize benchmark results, please install `matplotlib` and run
```bash
python plot_benchmark_data.py benchmark_results
```
## Tests
Use `test.sh` to test the implementation for correctness. For example, on RTX 3090:
```bash
bash test.sh 86
```
## Performance
Test environment:
- OS: Ubuntu 24.04.1 LTS
- GPU: NVIDIA RTX 3090
- Driver Version: 550.120
- CUDA Driver: 12.4, CUDA Runtime: 12.6, V12.6.85
- CMake 3.28.3
- g++ 13.3
<p align="center">
<img src="assets/perf.png" alt="perf" width="85%">
</p>
<p align="center">
<img src="assets/perf_locked.png" alt="perf" width="85%">
</p>
|
https://github.com/facebook/jemalloc
|
jemalloc
Meta fork of the OG Jemalloc project
Languages:
.github/workflows
bin
build-aux
doc
doc_internal
...
.appveyor.yml
.autom4te.cfg
.cirrus.yml
.clang-format
.git-blame-ignore-revs
<no readme found>
|
https://github.com/OpenBMB/CPM.cu
|
CPM.cu
CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge techniques in sparse architecture, speculative sampling and quantization.
Languages: Cuda (49.7%), C++ (29.6%), Python (20.7%)
.github/ISSUE_TEMPLATE
cpmcu
examples
scripts
src
...
.gitignore
.gitmodules
LICENSE
README.md
README_ZH.md
> README.md
# CPM.cu
<strong>[中文版本](./README_ZH.md) | English</strong>
CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge techniques in **sparse architecture**, **speculative sampling** and **quantization**.
<div id="news"></div>
## 🔥 Project Updates
- [2025.06.06] Optimized for [MiniCPM4](https://github.com/openbmb/minicpm).
  - Support InfLLM-v2 attention kernel
  - Support sliding-window for the MTP layer, optimized for long context
  - Support quantization for the MTP layer
- [2025.05.29] Support Quantization at [SpecMQuant](https://github.com/AI9Stars/SpecMQuant).
  - Support Marlin GPTQ kernel for the LLM
  - Support Speculative Sampling for quantized LLM
- [2025.03.01] Release the first version at [FR-Spec](https://github.com/thunlp/FR-Spec).
  - SOTA Speculative Sampling Implementation
  - Support FR-Spec: Frequency-Ranked Speculative Sampling
  - Support Tree-based verification of Speculative Sampling in Flash-Attention
  - Support Static memory management and memory reuse
  - Support Fused kernels
  - Support Chunked prefill
  - Support CUDA Graph
<div id="demo"></div>
## Demo
https://github.com/user-attachments/assets/ab36fd7a-485b-4707-b72f-b80b5c43d024
<div id="getstart"></div>
## Getting Started
- [Installation](#install)
- [Model Weights](#modelweights)
- [Quick Start](#example)
- [OpenAI API Server](#openai-api)
<div id="install"></div>
## Installation
### Install from source
This library's build depends on torch and ninja. Please install both before installing this library.
```bash
git clone https://github.com/OpenBMB/CPM.cu.git --recursive
cd CPM.cu
pip install .
```
If you encounter installation issues, please follow the error messages to resolve them or create a GitHub issue. You can use `python setup.py --help-config` to view more installation configuration options.
<div id="modelweights"></div>
## Prepare Model
Please follow [MiniCPM4's README](https://github.com/openbmb/minicpm) to download the model weights.
<div id="example"></div>
## Quick Start
We provide a simple example to show how to use CPM.cu to generate text.
```bash
cd examples
python3 minicpm4/test_generate.py --prompt-file <your prompt file>
```
If you don't specify the model path, the scripts will load the model from OpenBMB's Hugging Face repository.
If you want to use local paths, we recommend keeping all model filenames unchanged and placing them in the same directory. This way, you can run the model by specifying the directory with the -p parameter. Otherwise, we suggest modifying the paths in the code accordingly.
You can use --help to learn more about the script's features.
We also provide a script, `examples/long_prompt_gen.py`, to generate a long code-summarization prompt.
It automatically collects code from this repository and prompts the model to "Summarize the code."
```bash
cd examples
python3 long_prompt_gen.py # generate prompt.txt (for more details, use --help)
python3 minicpm4/test_generate.py --prompt-file ../prompt.txt
```
The output should be of the following format:
```bash
Generated text (streaming output):
--------------------------------------------------
Prefilling: 100.0% (106850/106850 tokens) @ 6565.3 tokens/s - Complete!
<Generated Output HERE>
==================================================
Stream Generation Summary:
==================================================
Prefill length: 106850
Prefill time: 16.36 s
Prefill tokens/s: 6530.77
Mean accept length: 2.50
Decode length: 118
Decode time: 0.76 s
Decode tokens/s: 154.59
```
Where:
- the `Prefill` and `Decode` statistics are reported as length, time, and tokens/s.
- the `Mean accept length` is the average number of tokens accepted per step when using Speculative Sampling.
<div id="openai-api"></div>
## OpenAI API Server (experimental)
Start the OpenAI-compatible API server (same args as `examples/minicpm4/test_generate.py`):
```bash
cd examples
python minicpm4/start_server.py [options]
```
Test the API (supports streaming and non-streaming modes):
```bash
cd examples
python test_openai_api.py [--no-stream]
```
Only `/v1/chat/completions` is supported and the `model` field is ignored.
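For reference, a minimal non-streaming request against this endpoint might look like the sketch below. The base URL (`http://localhost:8000`) is an assumption, not a documented default; match it to whatever host/port your `start_server.py` invocation reports.
```python
# Minimal sketch (not from the repo): assumes the server listens on http://localhost:8000.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "minicpm4",  # ignored by the server, but expected by the schema
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,      # set to True to exercise the streaming mode
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```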
## Code Structure
```bash
CPM.cu/
├── src/
│ ├── flash_attn/ # attention kernels: sparse, tree-verification, etc.
│ ├── model/
│ │ ├── minicpm4/ # minicpm4 model
│ │ ├── w4a16_gptq_marlin/ # marlin kernel
│ │ └── ... # common layers
│ ├── entry.cu # pybind: bind cuda and python
│ └── ...
├── cpmcu/ # python interface
└── ...
```
## More
### Word Frequency File Generation
We provide a word frequency generation script for FR-Spec, located at "scripts/fr_spec/gen_fr_index.py". You can run it as follows:
```bash
python scripts/fr_spec/gen_fr_index.py --model_path <your_model_path>
```
You can modify the code to use your own dataset. If your task is in a specific vertical domain, constructing word frequencies tailored to that domain can significantly improve processing speed.
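As a rough illustration of what "constructing word frequencies for a domain" involves, the sketch below counts token frequencies over a plain-text corpus and ranks the vocabulary. The tokenizer path, corpus filename, and output handling are placeholders; the real index format is whatever `gen_fr_index.py` produces.
```python
# Conceptual sketch only; use scripts/fr_spec/gen_fr_index.py to build the actual index.
from collections import Counter
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("<your_model_path>")  # placeholder path
counts = Counter()
with open("domain_corpus.txt", encoding="utf-8") as f:          # placeholder corpus
    for line in f:
        counts.update(tokenizer.encode(line, add_special_tokens=False))

# Token ids ranked from most to least frequent in the domain corpus.
ranked_token_ids = [tok_id for tok_id, _ in counts.most_common()]
print(ranked_token_ids[:20])
```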
### GPTQ to Marlin Conversion
We provide a script to convert GPTQ-quantized model to Marlin format, located at "scripts/model_convert/gptq2marlin.py". You can run it as follows:
```bash
python scripts/model_convert/gptq2marlin.py \
--src <gptq_model_path> \
--dst <marlin_model_path>
```
This script supports the MiniCPM, Llama and EAGLE formats. It will automatically detect the model type and perform the appropriate conversion.
## Acknowledgments
Our `src/flash_attn` folder is modified from [FlashAttention](https://github.com/Dao-AILab/flash-attention/tree/v2.6.3/csrc/flash_attn).
We have drawn inspiration from the following repositories:
- [EAGLE](https://github.com/SafeAILab/EAGLE)
- [Block-Sparse-Attention](https://github.com/mit-han-lab/Block-Sparse-Attention)
- [vLLM](https://github.com/vllm-project/vllm)
- [SGLang](https://github.com/sgl-project/sglang)
## Citation
Please cite our paper if you find our work valuable.
```
@article{zhao2025fr,
title={FR-Spec: Accelerating Large-Vocabulary Language Models via Frequency-Ranked Speculative Sampling},
author={Zhao, Weilin and Pan, Tengyu and Han, Xu and Zhang, Yudi and Sun, Ao and Huang, Yuxiang and Zhang, Kaihuo and Zhao, Weilun and Li, Yuxuan and Wang, Jianyong and others},
journal={arXiv preprint arXiv:2502.14856},
year={2025}
}
@article{zhang2025specmqaunt,
title={Speculative Decoding Meets Quantization: Compatibility Evaluation and Hierarchical Framework Design},
author={Zhang, Yudi and Zhao, Weilin and Han, Xu and Zhao, Tiejun and Xu, Wang and Cao, Hailong and Zhu, Conghui},
journal={arXiv preprint arXiv:2505.22179},
year={2025}
}
@article{minicpm4,
title={MiniCPM4: Ultra-Efficient LLMs on End Devices},
author={MiniCPM},
year={2025}
}
```
|
https://github.com/amd/MxGPU-Virtualization
|
MxGPU-Virtualization
Languages: C (98.5%), C++ (1.0%), Python (0.3%), CSS (0.1%), Makefile (0.1%), M4 (0.0%)
dkms
gim-coms-lib
gim_shim
libgv
package
...
.gitignore
Kconfig
LICENSE
Makefile
README.md
> README.md
# GIM
## What is GIM?
[GIM](https://github.com/amd/MxGPU-Virtualization#) (GPU-IOV Module) is a Linux kernel module for AMD's SR-IOV-based hardware virtualization (MxGPU) products. It supports KVM-based hypervisors with the necessary kernel compatibility layer. GIM is responsible for:
* GPU IOV initialization
* Virtual function configuration and enablement
* GPU scheduling for world switch
* Hang detection and virtual function level reset (FLR)
* PF/VF handshake and other GPU utilities.
## DOCUMENTATION:
Please check out our [User Guide](https://instinct.docs.amd.com/projects/virt-drv/en/latest/) for instructions on how to set up GIM and example configurations to run SR-IOV enabled VMs.
## Hardware/Features supported:
Please check the latest [release note](https://github.com/amd/MxGPU-Virtualization/releases).
|
https://github.com/ramseymcgrath/PCILeechFWGenerator
|
PCILeechFWGenerator
Automatically generates custom pcileech firmware from real pcie devices. Supports behavior inspection, advanced customization options and multiple profiles.
Languages: Python (73.9%), Jinja (9.6%), HTML (8.7%), SystemVerilog (5.3%), Shell (1.2%), C (0.6%)
.github
.vscode
_site
boards
configs/devices
...
.coveragerc
.dockerignore
.gitignore
.pre-commit-config.yaml
.readthedocs.yml
> README.md
# PCILeech Firmware Generator
[CI](https://github.com/ramseymcgrath/PCILeechFWGenerator/actions) [Codecov](https://codecov.io/gh/ramseymcgrath/PCILeechFWGenerator)
Generate authentic PCIe DMA firmware from real donor hardware with a single command. This tool extracts donor configurations from a local device and generates unique PCILeech FPGA bitstreams (and optionally flashes a DMA card over USB-JTAG).
> [!WARNING]
> This tool requires *real* hardware. The templates are built using the device identifiers directly from a donor card and placeholder values are explicitly avoided. Using your own donor device ensures your firmware will be unique.
## ✨ Key Features
- **Donor Hardware Analysis**: Extract real PCIe device configurations and register maps from live hardware via VFIO
- **Dynamic Device Capabilities**: Generate realistic network, storage, media, and USB controller capabilities with pattern-based analysis
- **Full 4KB Config-Space Shadow**: Complete configuration space emulation with BRAM-based overlay memory
- **MSI-X Table Replication**: Exact replication of MSI-X tables from donor devices with interrupt delivery logic
- **Deterministic Variance Seeding**: Consistent hardware variance based on device serial number for unique firmware
- **Advanced SystemVerilog Generation**: Comprehensive PCIe device controller with modular template architecture
- **Active Device Interrupts**: MSI-X interrupt controller with timer-based and event-driven interrupt generation
- **Memory Overlay Mapping**: BAR dispatcher with configurable memory regions and custom PIO windows
- **Interactive TUI**: Modern Textual-based interface with real-time device monitoring and guided workflows
- **Containerized Build Pipeline**: Podman-based synthesis environment with automated VFIO setup
- **Automated Testing and Validation**: Comprehensive test suite with SystemVerilog assertions and Python unit tests
- **USB-JTAG Flashing**: Direct firmware deployment to DMA boards via integrated flash utilities
📚 **[Complete Documentation](https://pcileechfwgenerator.ramseymcgrath.com)** | 🏗️ **[Device Cloning Guide](https://pcileechfwgenerator.ramseymcgrath.com/device-cloning)** | ⚡ **[Dynamic Capabilities](https://pcileechfwgenerator.ramseymcgrath.com/dynamic-device-capabilities)** | 🔧 **[Development Setup](https://pcileechfwgenerator.ramseymcgrath.com/development)**
## 🚀 Quick Start
### Installation
```bash
# Install with TUI support (recommended)
pip install pcileechfwgenerator[tui]
# Load required kernel modules
sudo modprobe vfio vfio-pci
```
### Requirements
- **Python ≥ 3.9**
- **Donor PCIe card** (any inexpensive NIC, sound, or capture card)
- **Linux OS** (You need this)
### Optional Requirements
- **Podman** (_not Docker_, required for proper PCIe device mounting). You can either use Podman or run the Python locally; both options require Linux.
- **DMA board** (pcileech_75t484_x1, pcileech_35t325_x4, or pcileech_100t484_x1). You don't have to flash your firmware with this tooling, but you can.
- **Vivado Studio** (2022.2+, for synthesis and bitstream generation). You can use a locally generated Vivado project or insert the files into an existing one.
### Basic Usage
```bash
# Interactive TUI (recommended for first-time users)
sudo python3 pcileech.py tui
# CLI interface for scripted builds
sudo python3 pcileech.py build --bdf 0000:03:00.0 --board pcileech_35t325_x1
# CLI build with custom Vivado settings
sudo python3 pcileech.py build --bdf 0000:03:00.0 --board pcileech_35t325_x1 \
--vivado-path /tools/Xilinx/2025.1/Vivado --vivado-jobs 8 --vivado-timeout 7200
# Check VFIO configuration
sudo python3 pcileech.py check --device 0000:03:00.0
# Flash firmware to device
sudo python3 pcileech.py flash output/firmware.bin
```
> [!NOTE]
> The legacy entrypoint has been removed; please see the steps above and update your scripts accordingly.
### Development from Repository
```bash
git clone https://github.com/ramseymcgrath/PCILeechFWGenerator.git
cd PCILeechFWGenerator
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
sudo -E python3 pcileech.py tui
```
## 🔧 Troubleshooting
### VFIO Setup Issues
> [!WARNING]
> Avoid using on-board devices (audio, graphics cards) for donor info. The VFIO process can lock the bus during extraction and cause system reboots.
The most common issues involve VFIO (Virtual Function I/O) configuration. Use the built-in diagnostic tool:
```bash
# Check VFIO setup and device compatibility
sudo python3 pcileech.py check
# Check a specific device
sudo python3 pcileech.py check --device 0000:03:00.0
# Interactive mode with guided fixes
sudo python3 pcileech.py check --interactive
# Attempt automatic fixes
sudo python3 pcileech.py check --fix
```
### Common VFIO Problems
**1. IOMMU not enabled in BIOS/UEFI**
```bash
# Enable VT-d (Intel) or AMD-Vi (AMD) in BIOS settings
# Then add to /etc/default/grub GRUB_CMDLINE_LINUX:
# For Intel: intel_iommu=on
# For AMD: amd_iommu=on
sudo update-grub && sudo reboot
```
**2. VFIO modules not loaded**
```bash
sudo modprobe vfio vfio_pci vfio_iommu_type1
```
**3. Device not in IOMMU group**
```bash
# Check IOMMU groups
find /sys/kernel/iommu_groups/ -name '*' -type l | grep YOUR_DEVICE_BDF
```
**4. Permission issues**
```bash
# Add user to required groups
sudo usermod -a -G vfio $USER
sudo usermod -a -G dialout $USER # For USB-JTAG access
```
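Related to problem 3 above: if you prefer to check IOMMU group membership from Python rather than shell, a minimal sketch (not part of the project's CLI) is shown below; the BDF is a placeholder.
```python
# Minimal sketch, assuming a Linux host with sysfs mounted; substitute your donor BDF.
import os

bdf = "0000:03:00.0"  # placeholder device address
group = os.path.basename(os.path.realpath(f"/sys/bus/pci/devices/{bdf}/iommu_group"))
print(f"{bdf} is in IOMMU group {group}")

# Every device in the group must be bound to vfio-pci before passthrough works.
print("Group members:", os.listdir(f"/sys/kernel/iommu_groups/{group}/devices"))
```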
### Installation Issues
```bash
# If pip installation fails
pip install --upgrade pip setuptools wheel
pip install pcileechfwgenerator[tui]
# For TUI dependencies
pip install textual rich psutil watchdog
# Container issues
podman --version
podman info | grep rootless
```
> [!NOTE]
> If you run into issues with your Vivado project file formatting, first clear out all your cached files and rerun. Otherwise, try pulling a copy of the pcileech repo directly and inserting the generator output into it.
## 📚 Documentation
For detailed information, please visit our **[Documentation Site](https://pcileechfwgenerator.ramseymcgrath.com)**:
- **[Device Cloning Process](https://pcileechfwgenerator.ramseymcgrath.com/device-cloning)** - Complete guide to the cloning workflow
- **[Firmware Uniqueness](https://pcileechfwgenerator.ramseymcgrath.com/firmware-uniqueness)** - How authenticity is achieved
- **[Manual Donor Dump](https://pcileechfwgenerator.ramseymcgrath.com/manual-donor-dump)** - Step-by-step manual extraction
- **[Development Setup](https://pcileechfwgenerator.ramseymcgrath.com/development)** - Contributing and development guide
- **[TUI Documentation](https://pcileechfwgenerator.ramseymcgrath.com/tui-readme)** - Interactive interface guide
- **[Config space info](https://pcileechfwgenerator.ramseymcgrath.com/config-space-shadow)** - Config space shadow info
## 🧹 Cleanup & Safety
- **Rebind donors**: Use TUI/CLI to rebind donor devices to original drivers
- **Keep firmware private**: Generated firmware contains real device identifiers
- **Use isolated build environments**: Never build on production systems
- **Container cleanup**: `podman rmi pcileechfwgenerator:latest`
> [!IMPORTANT]
> This tool is intended for educational research and legitimate PCIe development purposes only. Users are responsible for ensuring compliance with all applicable laws and regulations. The authors assume no liability for misuse of this software.
## 🏆 Acknowledgments
- **PCILeech Community**: For feedback and contributions
- @Simonrak for the writemask implementation
## 📄 License
This project is licensed under the Apache License - see the [LICENSE](LICENSE) file for details.
## ⚠️ Legal Notice
*AGAIN* This tool is intended for educational research and legitimate PCIe development purposes only. Users are responsible for ensuring compliance with all applicable laws and regulations. The authors assume no liability for misuse of this software.
**Security Considerations:**
- Never build firmware on systems used for production or sensitive operations
- Use isolated build environments (separate, dedicated hardware)
- Keep generated firmware private and secure
- Follow responsible disclosure practices for any security research
- Use the SECURITY.md template to raise security concerns
---
|
https://github.com/allkern/iris
|
iris
Experimental PlayStation 2 emulator
Languages: C (61.5%), C++ (37.7%)
.github
frontend
res
shaders
src
...
.gitignore
.gitmodules
AppImage.cmake
CMakeLists.txt
Info.plist
> README.md
<div align="center" text-align="center" width="100%">
<img width="55%" src="https://github.com/user-attachments/assets/d59e2d95-5791-4497-9985-442ca5115ac6">
</div>
# 🐣 Iris
Experimental Sony PlayStation 2 emulator and debugger
## Screenshots
<div align="center" class="grid" markdown>
<img width="45%" src="https://github.com/user-attachments/assets/39106951-9d45-484f-b4ae-13197305bf06"/>
<img width="45%" src="https://github.com/user-attachments/assets/e7d24d24-ccac-4239-baba-80d880db35bf"/>
<img width="45%" src="https://github.com/user-attachments/assets/3d2499fd-304e-4f2c-a1ce-677912f13753"/>
<img width="45%" src="https://github.com/user-attachments/assets/de37505e-efea-4d3a-94fe-3438b2e9722b"/>
<img width="45%" src="https://github.com/user-attachments/assets/d97b16fe-f59f-4174-97eb-f4dadf4c4df0"/>
<img width="45%" src="https://github.com/user-attachments/assets/f061db57-96f3-4fad-94ea-8b023a5875ad"/>
<img width="45%" src="https://github.com/user-attachments/assets/5ac202f5-eb74-493f-bb35-c6acf752a50b"/>
<img width="45%" src="https://github.com/user-attachments/assets/099ddda9-4f7f-4d8d-8071-40741bbd3bfc"/>
</div>
## Usage
> [!WARNING]
> This emulator is under development, most games WILL run at very low/unplayable framerates.
Iris has a graphical user interface and also supports launching from the command line:
```
Usage: iris [OPTION]... <path-to-disc-image>
-b, --bios Specify a PlayStation 2 BIOS dump file
--rom1 Specify a DVD player dump file
--rom2 Specify a ROM2 dump file
-d, --boot Specify a direct kernel boot path
-i, --disc Specify a path to a disc image file
-x, --executable Specify a path to an ELF executable to be
loaded on system startup
--slot1 Specify a path to a memory card file to
be inserted on slot 1
--slot2 Specify a path to a memory card file to
be inserted on slot 2
-h, --help Display this help and exit
-v, --version Output version information and exit
```
Launching a game or executable through the GUI is also very easy: you can either go to Iris > Open... and pick a disc image or ELF executable, or just drop a file into Iris' window to launch it!
## Building
> [!WARNING]
> Building requires CMake on all supported platforms
### Linux
Building on Linux requires installing SDL3 dependencies and FUSE if you wish to generate AppImages.
```
sudo apt update
sudo apt upgrade
sudo add-apt-repository universe
sudo apt-get install build-essential git make \
pkg-config cmake ninja-build gnome-desktop-testing libasound2-dev libpulse-dev \
libaudio-dev libjack-dev libsndio-dev libx11-dev libxext-dev \
libxrandr-dev libxcursor-dev libxfixes-dev libxi-dev libxss-dev libxtst-dev \
libxkbcommon-dev libdrm-dev libgbm-dev libgl1-mesa-dev libgles2-mesa-dev \
libegl1-mesa-dev libdbus-1-dev libibus-1.0-dev libudev-dev libfuse2t64
```
Then just clone the repository and run CMake:
```
git clone https://github.com/allkern/iris --recursive
cd iris
cmake -S . -B build
cmake --build build -j8
```
Optionally run `cmake --install build` to generate an AppImage.
### Windows
We currently only support GCC as a compiler on Windows, because MSVC doesn't have an inline assembler, which we need to embed resources into the executable. This might eventually be fixed, though!
```
git clone https://github.com/allkern/iris --recursive
cd iris
cmake -S . -B build -G "MinGW Makefiles"
cmake --build build -j8
```
### macOS
Iris finally got working macOS builds!
```
git clone https://github.com/allkern/iris --recursive
cd iris
cmake -S . -B build
cmake --build build -j8
```
Optionally run `sudo cmake --install build` to generate a macOS App Bundle
## Progress
### Commercial games
A small number of commercial games boot in-game, and a slightly bigger set of games can boot to the title screen. Most of them do nothing though, and the ones that do usually run way too slowly to be playable.
### BIOS
Pretty much all BIOSes I've tried work just fine, even some obscure ones like the Chinese BIOS and the PSX DESR BIOS (more on this later).
It is also possible to specify paths to ROM1 (DVD player) and ROM2 (Chinese extensions, required for the Chinese BIOS).
## PSX DESR
Support for the PSX DESR console is early but somewhat functional. The DESR BIOS plays the boot animation but later fails some sort of diagnostic test. The DESR requires Flash, ATA and MagicGate emulation, which Iris doesn't yet support.
Booting to the XMB should be possible once these features are implemented, and is one of my medium-term goals for this project.
If you want to try it for yourself, you need to dump the BIOS out of your PSX console, then just clone the `desr` branch, build the emulator and set up the BIOS, ROM1 and ROM2 dumps in Settings > BIOS, or through the command line.
# Special thanks and acknowledgements
I would like to thank the emudev Discord server, Ziemas, Nelson (ncarrillo), cakehonolulu, PSI-rockin, noumi and the PCSX2 team for their kind support.
This project makes use of ImGui, gl3w, toml++, Portable File Dialogs and stb_image
### Components
This console is significantly more complex compared to the PS1, here's a rough list of components:
```
🟡 EE (R5900) CPU
- 🟡 FPU
- 🟡 MMI (SIMD)
- 🟡 TLB
- 🟡 DMAC
- 🟢 INTC
- 🟡 Timers
- 🟢 GIF
- 🟡 GS
- 🟡 VU0
= 🟡 Macro mode
= 🟡 Micro mode
= 🟡 VIF0
- 🟡 VU1 (always micro mode)
= 🟡 VIF1
- 🟡 IPU
🟢 IOP (R3000) CPU
- 🟡 DMAC
- 🟢 INTC
- 🟡 Timers
- 🟢 CDVD
- 🟢 SIO2 (controllers and Memory Cards)
- 🟢 SPU2
- 🟡 DEV9
- 🟡 USB/FireWire?
- 🔴 Ethernet
- 🔴 PS1 backcompat (PS1 hardware)
🟢 SIF
```
|
https://github.com/x86matthew/WinVisor
|
WinVisor
WinVisor - A hypervisor-based emulator for Windows x64 user-mode executables using Windows Hypervisor Platform API
Languages: C++ (73.9%), C (26.1%)
Common
WinVisor
WinVisorDLL
x64/release
...
LICENSE
README.md
WinVisor.sln
winvisor_screenshot.png
> README.md
# WinVisor
## Overview
In Windows 10 (version RS4), Microsoft introduced the Windows Hypervisor Platform (WHP) API. This API exposes Microsoft's built-in hypervisor functionality to user-mode Windows applications. In 2024, I used this API to create another project: a 16-bit MS-DOS emulator called DOSVisor. This project takes the concept further, and allows Windows x64 executables to be emulated within a virtualized environment.
The WHP API allows applications to create a virtual CPU, and map virtual memory from the host process directly into the guest's physical memory. The emulator uses this functionality to build a virtual environment which contains everything needed to execute a Windows user-mode process. This involves building up the memory space within the guest, including mapping the target executable and all DLL dependencies, followed by populating other internal data structures such as the `PEB`, `TEB`, `KUSER_SHARED_DATA`, etc.
Mapping the EXE and DLL dependencies into memory is a simple task, but accurately maintaining internal structures, such as the PEB, is more complex. These structures are large, mostly undocumented, and their contents can vary between Windows versions. Instead of manually building up the memory layout within the virtual environment, WinVisor launches a suspended instance of the target process and clones the entire address space into the guest. The IAT and TLS data directories are temporarily removed from the PE headers in memory to stop DLL dependencies from loading and to prevent TLS callbacks from executing before reaching the entry point. The process is then resumed, allowing the usual process initialization to continue until it reaches the entry point of the target executable, at which point the hypervisor launches and takes control.
As the WHP API only allows memory from the current process to be mapped into the guest, the main hypervisor logic is encapsulated within a DLL that gets injected into the target process.
At the present time, the emulator simply forwards all syscalls to the host OS and logs them to the console. However, the project provides a framework to easily facilitate syscall hooks if necessary.
## Usage
WinVisor has some limitations in its current form - the biggest one being that it currently only supports virtualizing a single thread. Other examples are described in further detail in the **Limitations** section below.
Despite these limitations, it still works well with many executables. It has been tested successfully against built-in Windows executables such as `cmd.exe`, `ping.exe`, and even GUI applications such as `mspaint.exe` and `notepad.exe` (although these only run partially virtualized as described later).
To launch WinVisor, simply execute the following command:
`WinVisor.exe <target_executable_path>`
Command-line parameters can also be specified for the target application, for example:
`WinVisor.exe c:\windows\system32\ping.exe 8.8.8.8`
If `[ERROR] Failed to initialise Windows Hypervisor Platform API` is displayed, please ensure that `Windows Hypervisor Platform` is installed and enabled in "Windows Features".

*(screenshot above shows WinVisor emulating `cmd.exe` within a virtualized environment)*
## Virtual CPU
The emulator creates a virtual CPU via WHP to execute the target binary. The virtual CPU operates almost exclusively in CPL3 (user-mode), except for a small bootloader that runs at CPL0 (kernel-mode) to initialize the CPU state before execution. The initialization process involves setting up the following aspects:
- Control registers (`CR0`, `CR3`, `CR4`, `XCR0`)
- MSRs (`MSR_EFER`, `MSR_LSTAR`, `MSR_STAR`, `MSR_GS_BASE`)
- GDT
- IDT
- TSS
- Initial segment selectors and register values
- Paging table (4-layer)
Once the initial CPU state has been set up, it switches to CPL3 via a `SYSRET` instruction and begins executing the target application.
The emulator handles both `SYSCALL` instructions and legacy (`INT 2E`) syscalls. To catch system calls performed via the `SYSCALL` instruction, the `MSR_LSTAR` value is set to a reserved placeholder address. This placeholder address exists in kernel space, ensuring that no conflicts occur with real user-mode memory within the process. When the virtual CPU attempts to execute the `SYSCALL` instruction, a page fault exception is generated, causing a VM exit which indicates to the host that a syscall is pending.
Legacy interrupt-based syscalls are handled in a very similar way. The IDT is pre-populated with a range of placeholder handler addresses, causing a VM exit when an interrupt occurs. As the placeholder addresses are unique, the host can easily calculate which interrupt type is pending. In the case of legacy syscalls, an internal wrapper is used to proxy these calls to the same handler that is used by the `SYSCALL` instruction, before returning cleanly via `IRETQ`.
## Memory Paging
As mentioned earlier, the emulator creates a child process, and all virtual memory within that process is mapped directly into the guest using the same address layout. A paging table is used to map virtual addresses to the corresponding physical pages.
Instead of mapping the entire address space of the process upfront, a fixed number of physical pages are allocated for the guest. The emulator contains a very basic memory manager, and pages are mapped "on demand". When a page fault occurs, the requested page will be paged in, and execution resumes. If all page "slots" are full, the oldest entry is swapped out to make room for the new one.
In addition to using a fixed number of currently-mapped pages, the emulator also uses a fixed-size page table. The size of the page table is determined by calculating the maximum possible number of tables (`PML4`, `PDPT`, `PD`, `PT`) for the amount of mapped page entries. This model results in a simple and consistent physical memory layout but comes at the cost of efficiency. In fact, the paging tables take up more space than the actual page entries. However, as the emulator functions well even with a small number of allocated pages, this level of overhead is not a major concern.
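As a rough conceptual sketch of that on-demand scheme (illustrative Python, not WinVisor's actual C++ memory manager): a fixed pool of slots fills up as page faults arrive, and once full, the oldest mapping is swapped out to make room.
```python
# Conceptual sketch of FIFO "on demand" page-slot management; assumes nothing about
# WinVisor's real data structures.
from collections import OrderedDict

class PageSlots:
    def __init__(self, max_slots):
        self.max_slots = max_slots
        self.slots = OrderedDict()  # guest page -> slot index, oldest first

    def on_page_fault(self, guest_page):
        if guest_page in self.slots:
            return self.slots[guest_page]                    # already mapped
        if len(self.slots) >= self.max_slots:
            _evicted, slot = self.slots.popitem(last=False)  # swap out the oldest entry
        else:
            slot = len(self.slots)                           # use the next free slot
        self.slots[guest_page] = slot                        # map the requested page
        return slot

mapper = PageSlots(max_slots=4)
for page in (0x1000, 0x2000, 0x3000, 0x4000, 0x5000, 0x1000):
    print(hex(page), "-> slot", mapper.on_page_fault(page))
```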
## Limitations
**Single-thread only**
The emulator currently only supports virtualizing a single thread. If the target executable creates additional threads, they will be executed natively. To support multiple threads, a pseudo-scheduler could be developed to handle this in the future.
The Windows parallel loader is disabled to ensure all module dependencies are loaded by a single thread.
**Software exceptions**
Virtualized software exceptions are not currently supported. If an exception occurs, the system will call the `KiUserExceptionDispatcher` function natively within the target process as usual.
**Safety issues**
There are several ways to "escape" the VM, such as simply creating a new process/thread, scheduling APC calls, etc. Windows GUI-related syscalls can also make nested calls directly back into user-mode from the kernel, which would currently bypass the hypervisor layer. For this reason, GUI executables such as `notepad.exe` are only partially virtualized when run under WinVisor at this time.
**Shared host memory**
As the WinVisor host DLL is injected into the target process, it exists within the same virtual address space as the target executable in the guest. This means the code running within the virtual CPU is able to directly access the memory within the host hypervisor module, and could potentially corrupt it.
**Non-executable guest memory**
While the virtual CPU is set up to support NX, all memory regions are currently mirrored into the guest with full RWX access.
## Further Reading
This project is described in further detail in the following article: https://www.elastic.co/security-labs/winvisor-hypervisor-based-emulator
During development, I came across a similar project called [Simpleator](https://github.com/ionescu007/Simpleator) by Alex Ionescu. His project also utilizes the WHP API to emulate Windows x64 binaries, but is implemented in a very different way.
|
https://github.com/Taccel-Simulator/Taccel
|
Taccel
Taccel: Scaling-up Vision-based Tactile Robotics with High-performance GPU Simulation
Languages: Cuda (99.7%)
assets
examples
ptx
taccel
thirdparty/warp
...
.gitignore
LICENSE
README.md
pyproject.toml
requirements.txt
> README.md
# Taccel: Scaling-up Vision-based Tactile Robotics with High-performance GPU Simulation
[**Yuyang Li**](https://yuyangli.com)<sup>1,2 *</sup>,
[**Wenxin Du**](https://dwxrycb123.github.io/)<sup>3 *</sup>,
[**Chang Yu**](https://changyu.io/)<sup>3 *</sup>,
[**Puhao Li**](https://xiaoyao-li.github.io)<sup>2</sup>,
[**Zihang Zhao**](https://zihangzhao.com/)<sup>1</sup>,
[**Tengyu Liu**](https://tengyu.ai)<sup>2</sup>,
[**Chenfanfu Jiang**](https://www.math.ucla.edu/~cffjiang/)<sup>3 †</sup>,
[**Yixin Zhu**](https://yzhu.io)<sup>1 †</sup>,
[**Siyuan Huang**](https://siyuanhuang.com)<sup>2 †</sup>
<sup>1</sup> Institute for AI, PKU
<sup>2</sup> State Key Lab of General AI, BIGAI
<sup>3</sup> AIVC Lab, UCLA
<sup>*</sup> Equal Contributor
<sup>†</sup> Corresponding Author
[📄 [Paper](https://taccel-simulator.github.io/assets/taccel-paper.pdf) ]
[📘 [Docs](https://taccel-simulator.github.io) ]
[🛠️ [Code](https://github.com/Taccel-Simulator/Taccel) ]
[📊 Data (Coming Soon) ]
If you use Taccel in your research, please use the following citation:
```bibtex
@article{li2025taccel,
title={Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation},
author={Li, Yuyang and Du, Wenxin and Yu, Chang and Li, Puhao and Zhao, Zihang and Liu, Tengyu and Jiang, Chenfanfu and Zhu, Yixin and Huang, Siyuan},
journal={arXiv preprint arXiv:2504.12908},
year={2025}
}
```
|
https://github.com/Friends-Security/RedirectThread
|
RedirectThread
Playing around with Thread Context Hijacking. Building more evasive primitives to use as alternative for existing process injection techniques
Languages: C++ (93.1%), C (4.7%), PowerShell (1.6%), CMake (0.6%)
AlertableThreadsForDays
ETWThreadCreationNoise
RedirectThread
ShellcodeExamples
...
.gitattributes
.gitignore
CMakeLists.txt
LICENSE
README.md
> README.md
# RedirectThread
This tool explores various techniques for remote code execution and thread manipulation on Windows, originating from the `CONTEXT` struct.
For a detailed explanation of the research and techniques, please refer to our blog post: **[New Process Injection Class: The CONTEXT-Only Attack Surface](https://blog.fndsec.net/2025/05/16/the-context-only-attack-surface/)**
## TL;DR
Most process injection techniques follow a familiar pattern:
allocate → write → execute.
In this research, we ask: what if we skip allocation and writing entirely?
By focusing on execution-only primitives, we found distinct approaches to inject code without allocating / writing memory:
* Inject a DLL using only `LoadLibraryA`.
* Call arbitrary WinAPI functions with parameters using `SetThreadContext`, without suspending a thread.
* Utilize only `NtCreateThread` to remotely allocate, write and execute shellcode.
* Expand the technique to APC functions such as `QueueUserAPC`.
This isn’t classic thread hijacking — we don’t necessarily suspend/resume a thread mid-execution to overwrite it.
## Projects Included
This solution contains the following main projects:
* **`RedirectThread`**: A tool demonstrating various remote thread injection techniques utilizing the `CONTEXT` struct while avoiding allocating / writing memory remotely (and some ROP gadgets).
* **`AlertableThreadsForDays`**: A utility for creating alertable threads, for testing with APC-based injection methods.
## Usage
```
Usage: C:\RedirectThread.exe [options]
Required Options:
--pid <pid> Target process ID to inject into
--inject-dll Perform DLL injection (hardcoded to "0.dll")
--inject-shellcode <file> Perform shellcode injection from file
--inject-shellcode-bytes <hex> Perform shellcode injection from hex string (e.g. 9090c3)
Delivery Method Options:
--method <method> Specify code execution method
CreateRemoteThread Default, creates a remote thread
NtCreateThread Uses NtCreateThread (less traceable)
QueueUserAPC Uses QueueUserAPC (requires --tid)
QueueUserAPC2 Uses QueueUserAPC2 (requires --tid)
NtQueueApcThread Uses NtQueueApcThread (requires --tid)
NtQueueApcThreadEx Uses NtQueueApcThreadEx (requires --tid)
NtQueueApcThreadEx2 Uses NtQueueApcThreadEx2 (requires --tid)
Context Method Options:
--context-method <method> Specify context manipulation method
rop-gadget Default, uses ROP gadget technique
two-step Uses a two-step thread hijacking approach
Additional Options:
--tid <tid> Target thread ID (required for APC methods)
--alloc-size <size> Memory allocation size in bytes (default: 4096)
--alloc-perm <hex> Memory protection flags in hex (default: 0x40)
--alloc-address <hex> Specify base address for allocation (hex, optional)
--use-suspend Use thread suspension for increased reliability
--verbose Enable verbose output
--enter-debug Pause execution at key points for debugger attachment
Example:
C:\RedirectThread.exe --pid 1234 --inject-dll mydll.dll
C:\RedirectThread.exe --pid 1234 --inject-shellcode payload.bin --verbose
C:\RedirectThread.exe --pid 1234 --inject-shellcode payload.bin --method NtCreateThread
C:\RedirectThread.exe --pid 1234 --inject-shellcode-bytes 9090c3 --method QueueUserAPC --tid 5678
C:\RedirectThread.exe --pid 1234 --inject-shellcode-bytes $bytes --context-method two-step --method NtQueueUserApcThreadEx2 --tid 5678
```
## Building the Project
You can build this project using either CMake or Visual Studio directly with the provided solution file (`RedirectThread.sln`).
### Option 1: Using CMake
This project can be built using CMake. You can either use CMake from the command line (if CMake is installed and in your system's PATH) or leverage the CMake Tools extension if you are using Visual Studio Code.
#### Prerequisites
* A C++ compiler that supports C++17 (e.g., MSVC, GCC, Clang).
* CMake (version 3.10 or higher).
#### Build Steps
The following steps describe building with CMake from the command line. If you are using the CMake Tools extension in VSCode, you can often perform the configuration and build steps through the extension's UI instead of running these commands manually.
1. **Clone the repository:**
```bash
git clone <repository-url>
cd RedirectThread
```
2. **Create a build directory and navigate into it:**
```bash
mkdir build
cd build
```
3. **Configure the project with CMake:**
* For Visual Studio (example for Visual Studio 2019, 64-bit):
```bash
cmake .. -G "Visual Studio 16 2019" -A x64
```
* For Makefiles (example):
```bash
cmake ..
```
* For other generators, please refer to CMake documentation.
4. **Build the project:**
* For Visual Studio:
```bash
cmake --build . --config Release
```
* For Makefiles:
```bash
make
```
Executables will typically be located in a subdirectory within your build folder (e.g., `build/Release` or `build/RedirectThread/Release`).
### Option 2: Using Visual Studio Solution File
1. Open `RedirectThread.sln` in Visual Studio.
2. Select the desired build configuration (e.g., Release, x64).
3. Build the solution (Build > Build Solution).
Executables will be located in the respective project output directories (e.g., `x64/Release`).
|
https://github.com/dipampaul17/KVSplit
|
KVSplit
Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit keys & 4-bit values, reducing memory by 59% with <1% quality loss. Includes benchmarking, visualization, and one-command setup. Optimized for M1/M2/M3 Macs with Metal support.
Languages: Python (73.2%), Shell (26.8%)
.github/workflows
models
patch
plots
results
...
.gitignore
LICENSE
README.md
perplexity_test_data.txt
> README.md
<div align="center">
# 🚀 KVSplit
**Differentiated KV Cache Quantization for Apple Silicon**
[Stars](https://github.com/dipampaul17/KVSplit/stargazers) [License](LICENSE)
<img src="./plots/kv_cache_memory_usage.png" alt="KV Cache Memory Usage" width="70%">
</div>
## 📌 Overview
Run **larger context windows** and **heavier LLMs** on your Mac by applying different quantization precision to keys vs values in the attention mechanism's KV cache. KVSplit enables you to:
- **Reduce memory usage by up to 72%** with minimal quality loss
- **Run 2-3x longer contexts** in the same memory budget
- **Maintain or improve inference speed** compared to FP16
- **Optimize for Apple Silicon** with full Metal support
## Key Findings
| Configuration | VRAM @ 8K tokens | Tokens/sec | Perplexity Change |
|---------------|-----------------|------------|-------------------|
| FP16 (base) | 176.00 MB (100%)| 54,360 | -- |
| K8V8 (8-bit) | 93.50 MB (47%) | 51,503 | +0.03% |
| **K8V4** | **71.50 MB (41%)** | **57,438** | **+0.86%** |
| K4V8 | 71.50 MB (41%) | 58,690 | +6.06% |
| K4V4 (4-bit) | 49.50 MB (28%) | 55,193 | +6.15% |
### Memory Savings by Sequence Length
| Configuration | 128 tokens | 2048 tokens | 4096 tokens | 8192 tokens |
|---------------|------------|-------------|-------------|-------------|
| FP16 (baseline) | 5.50 MB | 44.00 MB | 88.00 MB | 176.00 MB |
| K8V8 (8-bit) | 2.92 MB | 23.38 MB | 46.75 MB | 93.50 MB |
| K8V4 (mixed) | 2.23 MB | 17.88 MB | 35.75 MB | 71.50 MB |
| K4V8 (mixed) | 2.23 MB | 17.88 MB | 35.75 MB | 71.50 MB |
| K4V4 (4-bit) | 1.55 MB | 12.38 MB | 24.75 MB | 49.50 MB |
## Features
- Independent quantization of keys and values in the KV cache
- Optimized for Apple Silicon with Metal support
- Comprehensive benchmarking suite with perplexity measurement
- Memory usage and performance analysis tools
- Publication-quality visualization tools
- Easy setup and usage
## Prerequisites
- macOS (tested on Apple Silicon)
- Homebrew package manager
- Xcode Command Line Tools
## ⚡ Flexible Installation
```bash
# Clone the repository
git clone https://github.com/dipampaul17/KVSplit.git
cd kvsplit
# Run the installer script
chmod +x scripts/install_kvsplit.sh
./scripts/install_kvsplit.sh
```
The installer provides flexible options:
### 🐍 Python Setup Options
- **Virtual Environment** (default): Creates a standalone Python environment in the project folder
- **System Python**: Uses your existing Python installation instead of creating a virtual environment
- **Skip Python Setup**: For users who prefer to manage their Python environment manually
### 🔄 llama.cpp Integration Options
- **Standard Method** (default): Clones llama.cpp and applies the KV split patch
- **Git Submodule Method**: Adds llama.cpp as a git submodule (ideal for advanced users or development)
The installer will:
- Set up the project structure with your preferred configuration
- Configure llama.cpp with Metal support optimized for Apple Silicon
- Enable differentiated KV cache quantization
- Offer to download a small test model (optional)
- Set up visualization tools based on your Python preferences
## 🏎️ Quick Comparison
Want to see the benefits immediately? Run a quick comparison with your model:
```bash
# Run quick comparison with different configurations
python scripts/quick_compare.py --model models/your-model.gguf
```
This will show you a side-by-side comparison of FP16, K8V8, K8V4, K4V8, and K4V4 with memory usage, speed, and quality metrics.
## 📊 Impressive Results
<div align="center">
<img src="./plots/memory_vs_quality.png" alt="Memory vs Quality" width="50%">
</div>
### 📉 Memory Reduction
| Configuration | VRAM @ 8K tokens | Memory Savings | Quality Impact |
|---------------|-----------------|----------------|----------------|
| FP16 (base) | 176.00 MB | — | — |
| K8V8 (8-bit) | 93.50 MB | 47% | +0.03% |
| **K8V4** | **71.50 MB** | **59%** | **+0.86%** |
| K4V8 | 71.50 MB | 59% | +6.06% |
| K4V4 (4-bit) | 49.50 MB | 72% | +6.15% |
### 📈 Performance Impact
Using KVSplit doesn't just save memory—it often **improves inference speed** by 5-15%!
| Configuration | Tokens/sec (8K ctx) | Speedup vs FP16 |
|---------------|---------------------|----------------|
| FP16 | 54,360 | — |
| K8V8 | 51,503 | -5.3% |
| **K8V4** | **57,438** | **+5.7%** |
| K4V8 | 58,690 | +8.0% |
| K4V4 | 55,193 | +1.5% |
## 🧠 Project Structure
```
kvsplit/
├── llama.cpp/ # Optimized llama.cpp build
├── models/ # LLM model files
├── scripts/ # Utility scripts
│ ├── benchmark_kvsplit.py # Comprehensive benchmark tool
│ ├── install_kvsplit.sh # One-command installer
│ ├── quick_compare.py # Quick comparison utility
│ ├── capture_memory.sh # GIF creation for memory visualization
│ └── visualize_results.py # Generate publication-quality plots
├── results/ # Benchmark results (CSV/JSON)
├── plots/ # Generated visualizations
└── README.md # This file
```
## 🔬 Scientific Insight
<div align="center">
<img src="./plots/configuration_summary.png" alt="Configuration Summary" width="80%">
</div>
KV cache memory is dominated by storing key and value vectors for each token. Our research has revealed a critical insight: **keys are significantly more sensitive to quantization than values**.
### 🔑 Key Findings
- **Asymmetric Impact**: Keys require higher precision than values for maintaining quality
- **Sweet Spot**: K8V4 (8-bit keys, 4-bit values) provides optimal balance
- Only 0.86% perplexity degradation vs. FP16
- 59% memory reduction
- Faster inference than FP16
- **Confirmation**: K4V8 configuration shows 7x more quality degradation than K8V4, despite using the same total bits
This asymmetry allows for more efficient memory usage without compromising model quality, enabling longer context windows and larger models on consumer hardware.
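The memory figures above can be sanity-checked with simple arithmetic. The sketch below assumes the TinyLlama-1.1B test model (22 layers, 4 KV heads, head dimension 64) and llama.cpp's block sizes for Q8_0 (34 bytes per 32 elements) and Q4_0 (18 bytes per 32 elements); under those assumptions it reproduces the 176.00 / 71.50 / 49.50 MB values in the tables.
```python
# Back-of-the-envelope KV cache sizing, assuming TinyLlama-1.1B dimensions and
# llama.cpp quantization block formats (FP16, Q8_0, Q4_0).
N_LAYERS, N_KV_HEADS, HEAD_DIM = 22, 4, 64
BYTES_PER_ELEM = {"f16": 2.0, "q8_0": 34 / 32, "q4_0": 18 / 32}

def kv_cache_mb(tokens, key_fmt, val_fmt):
    elems = tokens * N_LAYERS * N_KV_HEADS * HEAD_DIM  # per key and per value
    return elems * (BYTES_PER_ELEM[key_fmt] + BYTES_PER_ELEM[val_fmt]) / 2**20

print(f"FP16: {kv_cache_mb(8192, 'f16', 'f16'):.2f} MB")    # ~176.00 MB
print(f"K8V4: {kv_cache_mb(8192, 'q8_0', 'q4_0'):.2f} MB")  # ~71.50 MB
print(f"K4V4: {kv_cache_mb(8192, 'q4_0', 'q4_0'):.2f} MB")  # ~49.50 MB
```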
## 💻 Usage Examples
### Running with Different KV Cache Precisions
```bash
# Baseline (FP16)
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn
# ⭐ RECOMMENDED: 8-bit keys, 4-bit values (K8V4)
# Best balance of quality and memory savings
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn --kvq 8
# 4-bit keys, 8-bit values (K4V8)
# Shows why key precision matters more than value precision
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn --kvq-key 4 --kvq-val 8
# 4-bit keys and values (K4V4)
# Maximum memory savings (72% reduction) with acceptable quality
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn --kvq 4
```
### Long Context Example (32K)
```bash
# Run with a 32K context (would require ~1.4GB in FP16, only ~400MB with K8V4)
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf \
-c 32768 -n 4096 -t 8 --flash-attn --kvq 8 \
-f your-long-document.txt
```
### 🚩 Command-Line Arguments
| Flag | Description | Recommendation |
|------|-------------|---------------|
| `-t 8` | Number of threads | 8 is optimal for most Apple Silicon chips |
| `--flash-attn` | Enables optimized attention | Recommended for Apple Silicon |
| `--kvq N` | Sets both key and value bits to N | Use `--kvq 8` for K8V4 configuration |
| `--kvq-key N` | Sets key bits only | Key precision has major quality impact |
| `--kvq-val N` | Sets value bits only | Value precision has minor quality impact |
| `-c N` | Context size in tokens | Longer contexts benefit more from KVSplit |
| `-n N` | Number of tokens to generate | Adjust based on your needs |
| `-f FILE` | Input file | For processing documents |
| `-m MODEL` | Model path | Path to your .gguf model file |
## 📏 Advanced Benchmarking
For comprehensive performance analysis, use our full benchmark suite:
```bash
# Run the full benchmark suite (all configurations and sequence lengths)
python scripts/benchmark_kvsplit.py
# Run a specific configuration test
python scripts/benchmark_kvsplit.py --config K8V4 --seq-len 4096
# Generate publication-quality visualizations
python scripts/visualize_results.py
```
The benchmarking script provides thorough measurements of:
- 📊 **Memory Usage**: VRAM and KV cache specifically
- ⚡ **Performance**: Tokens per second across different sequence lengths
- 🎯 **Quality**: Perplexity measurement using llama-perplexity
- 📈 **Scaling**: How memory usage and performance scale with sequence length
Results are saved in CSV/JSON formats with automatic summary statistics, and the visualization script generates publication-quality plots showing key insights.
## 🎬 Visual Memory Savings
You can visualize memory savings with our capture tool:
```bash
# Capture memory reduction in Activity Monitor
./scripts/capture_memory.sh
```
<div align="center">
<table>
<tr>
<td><img src="./plots/kv_cache_memory_usage.png" alt="Memory Usage" width="100%"></td>
<td><img src="./plots/key_value_sensitivity.png" alt="Key-Value Sensitivity" width="100%"></td>
</tr>
<tr>
<td><img src="./plots/perplexity_change.png" alt="Quality Impact" width="100%"></td>
<td><img src="./plots/inference_speed.png" alt="Speed Impact" width="100%"></td>
</tr>
</table>
</div>
## 🍎 Apple Silicon Optimization
- **Metal Performance**: Fully optimized for Apple's Metal framework
- **Memory Efficiency**: Critical for memory-constrained M series Apple silicon devices
- **Activity Monitor**: Use our `capture_memory.sh` script to visualize real-time memory reductions
- **Alignment**: 256B page alignment in llama.cpp means actual memory savings might differ slightly from theoretical calculations
## ⭐ Key Features
- **Differentiated Precision**: Independent key and value bit precision (K8V4, K4V8, etc)
- **Apple Silicon Optimization**: Full Metal support for M1/M2/M3/M4 chips
- **Comprehensive Benchmarking**: Memory, speed, and quality metrics
- **Publication-Quality Visualization**: Beautiful plots for analysis
- **Simple User Interface**: One-command install and quick comparison tools
- **Memory Visualization**: Tools to capture and visualize memory savings
## 🙏 Acknowledgments
This project implements ideas from recent research including:
- "More for Keys, Less for Values: Adaptive KV Cache Quantization" (2024)
- "Unifying KV Cache Compression for Large Language Models with LeanKV" (2025)
Additional credits:
- [llama.cpp](https://github.com/ggerganov/llama.cpp) - Base implementation
- [TinyLlama](https://huggingface.co/TinyLlama) - Test model
## 🧠 Configuration Recommendations
- **Best Overall**: 🌟 **K8V4** 🌟 (8-bit keys, 4-bit values)
- 59% memory reduction with only 0.86% quality loss
- Improved inference speed (+5.7% vs FP16)
- Great balance of quality and efficiency
- **Absolute Maximum Memory Savings**: K4V4 (4-bit keys and values)
- 72% memory reduction with ~6% quality loss
- Good for memory-constrained devices
- Acceptable for less sensitive applications
- **Best for Very Long Contexts**: K8V4 or K4V4
- Memory savings compound with context length
- Run 2-3x longer contexts in the same memory budget
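As a rough sanity check on the K8V4 numbers above (counting only the key/value payload and ignoring the 256B alignment overhead noted in the Apple Silicon section), the per-element KV footprint shrinks from 16 + 16 bits in FP16 to 8 + 4 bits:
```math
\frac{8 + 4}{16 + 16} = \frac{12}{32} = 37.5\% \quad\Rightarrow\quad \text{theoretical reduction} \approx 62.5\%
```
The measured ~59% reduction sits slightly below this theoretical figure because of allocator padding and page alignment.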
## 🔮 Future Roadmap
- [ ] **Adaptive Precision**: Dynamic precision based on token importance
- [ ] **Layer-Specific Quantization**: Different precision for different model layers
- [ ] **Model-Specific Optimizations**: Tailored for Mistral, Phi-3, etc.
- [ ] **Web Demo**: Interactive testing environment
- [ ] **Mobile Support**: Adapting for iOS and iPadOS
## 📜 License
MIT
## 🤝 Contributing
Contributions are welcome! Please open an issue or submit a pull request.
|
https://github.com/vivoblueos/kernel
|
kernel
Languages: Rust (96.2%), C (2.5%)
CREDITS
CREDITS
emballoc
emballoc
header
header
images
images
infra
infra
...
.gitignore
.gitignore
.licenserc.yaml
.licenserc.yaml
LICENSE
LICENSE
README.md
README.md
README_zh.md
README_zh.md
> README.md
<div align="center">
<img src="./images/logo.png" width="280" />
</div>
\[ English | [简体中文](README_zh.md) \]
# BlueOS Kernel
The BlueOS kernel is written in Rust, featuring security, light weight, and generality. It is compatible with POSIX interfaces and supports Rust's standard library.
## Technical Architecture
For details, please visit the BlueOS official website [kernel](https://blueos.vivo.com/kernel) page.
## Board Support
BlueOS kernel currently supports ARM32, ARM64, RISCV32 and RISCV64 chip architectures.
- QEMU platforms are supported for corresponding chip architectures.
- Hardware boards support is currently in progress.
## Repository Overview
| Repository Link | Description |
|----------------|-------------|
| apps | [Shell](https://github.com/vivoblueos/apps_shell) and [examples](https://github.com/vivoblueos/apps_example) developed based on Rust std |
| [book](https://github.com/vivoblueos/book) | Kernel technical documentation and tutorials, including detailed kernel development guides |
| [build](https://github.com/vivoblueos/build) | Project compilation build templates and scripts |
| [kernel](https://github.com/vivoblueos/kernel) | Core kernel repository, including CPU architecture support, system scheduler, sync primitives, async executor, memory management subsystem, file system, network subsystem, device subsystem, etc. |
| [libc](https://github.com/vivoblueos/libc) | BlueOS kernel libc header files, forked from [rust-lang/libc](https://github.com/rust-lang/libc) |
| [librs](https://github.com/vivoblueos/librs) | BlueOS kernel libc implementation based on Rust programming language |
# Getting started with the kernel development
To build and work with the BlueOS kernel, please check the following documentation.
- [Prepare basic build environment](https://github.com/vivoblueos/book/blob/main/src/getting-started.md)
- [Build customized Rust toolchain](https://github.com/vivoblueos/book/blob/main/src/build-rust-toolchain.md)
- [Work with the kernel](https://github.com/vivoblueos/book/blob/main/src/build-kernel.md)
# Technical Documentation
For more information about the BlueOS kernel, please refer to [the kernel book](https://github.com/vivoblueos/book).
|
https://github.com/iyush/COS
|
COS
Tiny x86_64 OS in C
Languages: C (92.7%), Assembly (2.3%), Linker Script (2.3%), Shell (2.2%)
kernel
kernel
userland
userland
...
.bochsrc
.bochsrc
.gitignore
.gitignore
README.md
README.md
build.sh
build.sh
debug.sh
debug.sh
> README.md
# COS
Tiny x86_64 Operating System written in C. The OS can:
1. Handle Interrupts.
2. Allocate Physical Memory
3. Load Executables (ELF).
4. Preemptively Schedule tasks
5. Do syscalls
The OS does not currently have (but will at some point in the future):
1. Virtual Memory Manager (It is very simple atm).
2. Graphics Stack
3. Networking Stack
## Building
Make sure you have [nix](https://nixos.org/) installed, and that you have cloned this repo recursively so that limine is pulled in. The currently supported limine version is:
```
HEAD detached at origin/v7.x-binary
```
1. Pop into nix-shell
```
nix-shell
```
2. Build limine
```
cd limine
make
```
3. Build the OS and Userland
```
./build.sh
```
## Running
Run:
```
./run.sh
```
## Debugging
Run:
```
./debug.sh
```
|
https://github.com/google-ai-edge/LiteRT-LM
|
LiteRT-LM
Languages: C++ (88.2%), Starlark (7.8%), Python (4.0%)
.github/workflows
.github/workflows
prebuilt/android_arm64
prebuilt/android_arm64
python
python
runtime
runtime
schema
schema
...
.bazelrc
.bazelrc
.bazelversion
.bazelversion
.gitignore
.gitignore
BUILD
BUILD
BUILD.darts_clone
BUILD.darts_clone
> README.md
# LiteRT-LM
A C++ library to efficiently run language models across edge platforms.
## Description
Language models are no longer a single model but really a pipeline of models and
components working together. LiteRT-LM builds on top of
[LiteRT](https://github.com/google-ai-edge/LiteRT) to enable these pipelines
including:
* **C++ API** to efficiently run language models
* **Cross-Platform** support via portable C++ for broad deployment scenarios
* **Flexible** so you can customize for your specific feature
* **Hardware Acceleration** to unlock the full potential of your device's
hardware
### Status: Early Preview
Expect our first full release of LiteRT-LM late summer / early fall. We heard
the community feedback regarding Google AI Edge's Gemma 3n LiteRT preview. You
want access on more platforms, more visibility into the underlying stack, and
more flexibility. LiteRT-LM can help with all three.
### 🚀 What's New
* ***June 24, 2025*** **: Run Gemma models with NPU Support (`v0.7.0`)**
Unlock significant performance gains! Our latest release leverages the power
of Neural Processing Units (NPUs) on devices with Qualcomm and MediaTek
chipsets to run the Gemma3 1B model with incredible efficiency.
**Note:** LiteRT-LM NPU acceleration is only available through an Early
Access Program. Please check out
[this page](https://ai.google.dev/edge/litert/next/npu) for more information
about how to sign up.
* ***June 10, 2025*** **: The Debut of LiteRT-LM: A New Framework for
On-Device LLMs** We're proud to release an early preview (`v0.6.1`) of the
LiteRT-LM codebase! This foundational release enables you to run the latest
Gemma series models across a wide range of devices with initial support for
CPU execution and powerful GPU acceleration on Android.
### Supported Backends & Platforms
Platform | CPU Support | GPU Support | NPU Support |
:----------- | :---------: | :-----------: | :-----------:
**Android** | ✅ | ✅ | ✅ |
**macOS** | ✅ | *Coming Soon* | - |
**Windows** | ✅ | *Coming Soon* | - |
**Linux** | ✅ | *Coming Soon* | - |
**Embedded** | ✅ | *Coming Soon* | - |
### Supported Models and Performance
Models currently supported during our Preview (in the `.litertlm` format).
Model | Quantization | Context size | Model Size (Mb) | Download link
:---------- | :---------------: | :----------: | :-------------: | :-----------:
Gemma3-1B | 4-bit per-channel | 4096 | 557 | [download](https://huggingface.co/litert-community/Gemma3-1B-IT/blob/main/Gemma3-1B-IT_multi-prefill-seq_q4_ekv4096.litertlm)
Gemma3n-E2B | 4-bit per-channel | 4096 | 2965 | [download](https://huggingface.co/google/gemma-3n-E2B-it-litert-lm-preview)
Gemma3n-E4B | 4-bit per-channel | 4096 | 4235 | [download](https://huggingface.co/google/gemma-3n-E4B-it-litert-lm-preview)
Below are the performance numbers from running each model on various devices. Note
that the benchmark is measured with a 1024-token prefill and a 256-token decode
(with a performance lock on Android devices).
| Model | Device | Backend | Prefill (tokens/sec) | Decode (tokens/sec) | Context size |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Gemma3-1B | MacBook Pro<br>(2023 M3) | CPU | 422.98 | 66.89 | 4096 |
| Gemma3-1B | Samsung S24<br>(Ultra) | CPU | 243.24 | 43.56 | 4096 |
| Gemma3-1B | Samsung S24<br>(Ultra) | GPU | 1876.5 | 44.57 | 4096 |
| Gemma3-1B | Samsung S25<br>(Ultra) | NPU | 5836.6 | 84.8 | 1280 |
| Gemma3n-E2B | MacBook Pro<br>(2023 M3) | CPU | 232.5 | 27.6 | 4096 |
| Gemma3n-E2B | Samsung S24<br>(Ultra) | CPU | 110.5 | 16.1 | 4096 |
| Gemma3n-E2B | Samsung S24<br>(Ultra) | GPU | 816.4 | 15.6 | 4096 |
| Gemma3n-E4B | MacBook Pro<br>(2023 M3) | CPU | 170.1 | 20.1 | 4096 |
| Gemma3n-E4B | Samsung S24<br>(Ultra) | CPU | 73.5 | 9.2 | 4096 |
| Gemma3n-E4B | Samsung S24<br>(Ultra) | GPU | 548.0 | 9.4 | 4096 |
## Quick Start
This guide provides the necessary steps to build and execute a Large Language
Model (LLM) on your device. Note that the LiteRT-LM runtime is designed to work
with models in the `.litertlm` format. You can find and download compatible
models in the
[Supported Models and Performance](#supported-models-and-performance) section.
**Want to try it out first?** Before proceeding with the full setup, you can use
the pre-built binary below to run the LiteRT-LM immediately:
- [Android Arm64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.android_arm64)
- [MacOS](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.macos_arm64)
- [Linux x86_64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.linux_x86_64)
- [Windows x86_64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.windows_x86_64.exe)
- [iOS Arm64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.ios_sim_arm64)
*Tip: you may have to explicitly approve the usage of pre-built binaries. For
example, on macOS, go to **System Settings > Privacy & Security >
Security** to approve the binary.*
### Prerequisites
Before you begin, please ensure you have the following installed:
- **Git**: To clone the repository and manage versions.
- **Bazel (version 7.6.1)**: This project uses `bazel` as its build system.
#### Get the Source Code
Current stable branch tag: [latest release](https://github.com/google-ai-edge/LiteRT-LM/releases/latest)
First, clone the repository to your local machine. We strongly recommend
checking out the latest stable release tag to ensure you are working with a
stable version of the code.
**Clone the repository:**
```
git clone git@github.com:google-ai-edge/LiteRT-LM.git
cd LiteRT-LM
```
**Fetch the latest tags from the remote repository:**
```
git fetch --tags
```
**Checkout the latest stable release ([latest release](https://github.com/google-ai-edge/LiteRT-LM/releases/latest)):**
To start working, create a new branch from the stable tag. This is the
recommended approach for development.
```
git checkout -b <my-feature-branch> <release-tag, e.g. "v0.6.1">
```
You are now on a local branch created from the tag and ready to work.
#### Install Bazel
This project requires Bazel version **7.6.1**. You can skip this if you already
have it set up.
The easiest way to manage Bazel versions is to install it via
[Bazelisk](https://github.com/bazelbuild/bazelisk). Bazelisk will automatically
download and use the correct Bazel version specified in the project's
.bazelversion file.
Alternatively, you can install Bazel manually by following the official
installation [instructions](https://bazel.build/install) for your platform.
### Build and Run the Command Line Demo
**LiteRT-LM** allows you to deploy and run LLMs on various platforms, including
Android, Linux, MacOS, and Windows. `runtime/engine/litert_lm_main.cc` is a
[command line demo](#litert_lm_main) that shows how to initialize and interact
with the model.
Please check the corresponding section below depending on your target deployment
device and your development platform.
<details>
<summary><strong>Deploy to Windows</strong></summary>
Building on Windows requires several prerequisites to be installed first.
#### Prerequisites
1. **Visual Studio 2022** - Install from Microsoft Store to get the MSVC
toolchain.
2. **Git for Windows** - Install from https://git-scm.com/download/win
(includes Git Bash needed for flatbuffer generation scripts).
3. **Python 3.11** - Install from Microsoft Store for Python dependencies.
4. **Bazel** - Install using Windows Package Manager (winget): `winget install --id=Bazel.Bazelisk -e`.
5. Download the `.litertlm` model from the
[Supported Models and Performance](#supported-models-and-performance)
section.
#### Building and Running
Once you've downloaded the `.litertlm` file, set the path for convenience:
```powershell
$Env:MODEL_PATH = "C:\path\to\your_model.litertlm"
```
Build the binary:
```powershell
# Build litert_lm_main for Windows.
bazelisk build //runtime/engine:litert_lm_main --config=windows
```
Run the binary (make sure you run the following command in **powershell**):
```powershell
# Run litert_lm_main.exe with a model .litertlm file.
bazel-bin\runtime\engine\litert_lm_main.exe `
--backend=cpu `
--model_path=$Env:MODEL_PATH
```
</details>
<details>
<summary><strong>Deploy to Linux / Embedded</strong></summary>
`clang` is used to build LiteRT-LM on Linux. Build `litert_lm_main`, a CLI
executable, and run models on CPU. Note that you should download the `.litertlm`
model from the
[Supported Models and Performance](#supported-models-and-performance) section.
Note that one can also deploy the model to Raspberry Pi using the same setup and
command in this section.
Once you've downloaded the `.litertlm` file, set the path for convenience:
```
export MODEL_PATH=<path to your .litertlm file>
```
Build the binary:
```
bazel build //runtime/engine:litert_lm_main
```
Run the binary:
```
bazel-bin/runtime/engine/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH
```
</details>
<details>
<summary><strong>Deploy to MacOS</strong></summary>
Xcode command line tools include clang. Run `xcode-select --install` if not
installed before. Note that you should download the `.litertlm` model from the
[Supported Models and Performance](#supported-models-and-performance) section.
Once you've downloaded the `.litertlm` file, set the path for convenience:
```
export MODEL_PATH=<path to your .litertlm file>
```
Build the binary:
```
bazel build //runtime/engine:litert_lm_main
```
Run the binary:
```
bazel-bin/runtime/engine/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH
```
</details>
<details>
<summary><strong>Deploy to Android</strong></summary>
To be able to interact with your Android device, please make sure you've
properly installed
[Android Debug Bridge](https://developer.android.com/tools/adb) and have a
connected device that can be accessed via `adb`.
**Note:** If you are interested in trying out LiteRT-LM with NPU acceleration,
please check out [this page](https://ai.google.dev/edge/litert/next/npu) for
more information about how to sign up for the Early Access Program.
<details>
<summary><strong>Develop in Linux</strong></summary>
To be able to build the binary for Android, one needs to install NDK r28b or
newer from https://developer.android.com/ndk/downloads#stable-downloads.
Specific steps are:
- Download the `.zip` file from
https://developer.android.com/ndk/downloads#stable-downloads.
- Unzip the `.zip` file to your preferred location (say
`/path/to/AndroidNDK/`)
- Point `ANDROID_NDK_HOME` to the NDK directory. It should be
something like:
```
export ANDROID_NDK_HOME=/path/to/AndroidNDK/
```
*Tips: make sure your `ANDROID_NDK_HOME` points to the directory that has
`README.md` in it.*
With the above set up, let's try to build the `litert_lm_main` binary:
```
bazel build --config=android_arm64 //runtime/engine:litert_lm_main
```
</details>
<details>
<summary><strong>Develop in MacOS</strong></summary>
Xcode command line tools include clang. Run `xcode-select --install` if not
installed before.
To be able to build the binary for Android, one needs to install NDK r28b or
newer from https://developer.android.com/ndk/downloads#stable-downloads.
Specific steps are:
- Download the `.dmg` file from
https://developer.android.com/ndk/downloads#stable-downloads.
- Open the `.dmg` file and move the `AndroidNDK*` file to your preferred
location (say `/path/to/AndroidNDK/`)
- Point `ANDROID_NDK_HOME` to the NDK directory. It should be
something like:
```
export ANDROID_NDK_HOME=/path/to/AndroidNDK/AndroidNDK*.app/Contents/NDK/
```
*Tips: make sure your `ANDROID_NDK_HOME` points to the directory that has
`README.md` in it.*
With the above set up, let's try to build the `litert_lm_main` binary:
```
bazel build --config=android_arm64 //runtime/engine:litert_lm_main
```
</details>
After the binary is successfully built, we can try to run the model on the
device. In order to run the binary on your Android device, we have to push a
few assets / binaries. First set your `DEVICE_FOLDER`, making sure you have
write access to it (typically you can put things under `/data/local/tmp/`):
```
export DEVICE_FOLDER=/data/local/tmp/
adb shell mkdir -p $DEVICE_FOLDER
```
To run with **CPU** backend, simply push the main binary and the `.litertlm`
model to device and run.
```
# Skip model push if it is already there
adb push $MODEL_PATH $DEVICE_FOLDER/model.litertlm
adb push bazel-bin/runtime/engine/litert_lm_main $DEVICE_FOLDER
adb shell $DEVICE_FOLDER/litert_lm_main \
--backend=cpu \
--model_path=$DEVICE_FOLDER/model.litertlm
```
To run with **GPU** backend, we need additional `.so` files. They are located in
the `prebuilt/` subfolder in the repo (we currently only support `arm64`).
```
# Skip model push if it is already there
adb push $MODEL_PATH $DEVICE_FOLDER/model.litertlm
adb push prebuilt/android_arm64/*.so $DEVICE_FOLDER
adb push bazel-bin/runtime/engine/litert_lm_main $DEVICE_FOLDER
adb shell LD_LIBRARY_PATH=$DEVICE_FOLDER \
$DEVICE_FOLDER/litert_lm_main \
--backend=gpu \
--model_path=$DEVICE_FOLDER/model.litertlm
```
Note that the first time a given model is loaded on a given device, it will take
longer to load. This is because the model weights are being arranged to run
optimally on your particular device's GPU. Subsequent loads will be faster
because the optimized weights are cached on your device.
</details>
### Command Line Demo Usage <span id="litert_lm_main"></span>
`litert_lm_main` is a command line demo for running and evaluating large
language models (LLMs) using our LiteRT [Engine/Session interface](#engine). It
provides basic functionality such as:
- generating text based on a user-provided prompt.
- executing the inference on various hardware backends, e.g. CPU / GPU.
- performance analysis options, allowing users to benchmark
prefill and decoding speeds, as well as monitor peak memory consumption
during the run.
- both synchronous and asynchronous execution modes.
Below are a few example commands (please update accordingly when using `adb`):
**Run the model with default prompt**
```
<path to binary directory>/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH
```
**Benchmark the model performance**
```
<path to binary directory>/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH \
--benchmark \
--benchmark_prefill_tokens=1024 \
--benchmark_decode_tokens=256 \
--async=false
```
*Tip: when benchmarking on Android devices, remember to use `taskset` to pin the
executable to the main core for getting the consistent numbers, e.g. `taskset
f0`.*
**Run the model with your prompt**
```
<path to binary directory>/litert_lm_main \
--backend=cpu \
--input_prompt=\"Write me a song\"
--model_path=$MODEL_PATH
```
A more detailed description of each flag is given in the following table:
| Flag Name | Description | Default Value |
| :--- | :--- | :--- |
| `backend` | Executor backend to use for LLM execution (e.g., cpu, gpu). | `"gpu"` |
| `model_path` | Path to the `.litertlm` file for LLM execution. | `""` |
| `input_prompt` | Input prompt to use for testing LLM execution. | `"What is the tallest building in the world?"` |
| `benchmark` | Benchmark the LLM execution. | `false` |
| `benchmark_prefill_tokens` | If benchmark is true and this value is > 0, the benchmark will use this number to set the prefill tokens, regardless of the input prompt. If this is non-zero, `async` must be `false`. | `0` |
| `benchmark_decode_tokens` | If benchmark is true and this value is > 0, the benchmark will use this number to set the number of decode steps, regardless of the input prompt. | `0` |
| `async` | Run the LLM execution asynchronously. | `true` |
| `report_peak_memory_footprint` | Report peak memory footprint. | `false` |
## LiteRT-LM API <span id="engine"></span>
The LiteRT-LM provides a C++ API for executing Language Models. It is designed
around two primary classes: `Engine` and `Session`.
- The **`Engine`** is the main entry point. It's responsible for loading the
model and its associated resources (like the tokenizer) from storage and
preparing them for execution. It acts as a factory for creating `Session`
objects.
- The **`Session`** represents a single, stateful conversation or interaction
with the LLM. It holds the context (like conversation history) and provides
the methods to actually generate text. Each `Session` is an independent
instance, allowing for multiple interactions.
### Basic Workflow for Text-in-Text-out Inference
The typical lifecycle for using the runtime is:
1. **Create an `Engine`**: Initialize a single `Engine` with the model path and
configuration. This is a heavyweight object that holds the model weights.
2. **Create a `Session`**: Use the `Engine` to create one or more lightweight
`Session` objects.
3. **Generate Content**: Use a `Session` object to run inference, either
through a simple one-shot API or through more granular prefill/decode steps.
Below is the simplest way to generate text and is recommended for most use
cases. It mirrors
[Gemini text generation APIs](https://ai.google.dev/gemini-api/docs).
- `GenerateContent`: A blocking call that takes user input and returns the
complete model response.
- `GenerateContentStream`: A non-blocking call that streams the model's
response back token-by-token through an observer.
Example code snippet:
```cpp
#include "third_party/odml/litert_lm/runtime/engine/engine.h"
// ...
// 1. Define model assets and engine settings.
auto model_assets = ModelAssets::Create(model_path);
CHECK_OK(model_assets);
auto engine_settings = EngineSettings::CreateDefault(
model_assets, litert::lm::Backend::CPU);
// 2. Create the main Engine object.
absl::StatusOr<std::unique_ptr<Engine>> engine = Engine::CreateEngine(engine_settings);
CHECK_OK(engine);
// 3. Create a Session for a new conversation.
auto session_config = SessionConfig::CreateDefault();
absl::StatusOr<std::unique_ptr<Engine::Session>> session = (*engine)->CreateSession(session_config);
CHECK_OK(session);
// 4. Generate content using the high-level API.
absl::StatusOr<Responses> responses = (*session)->GenerateContent(
{InputText("What is the tallest building in the world?")});
CHECK_OK(responses);
// 5. Print the response.
std::cout << *responses << std::endl;
```
### Inference with GPU Backend
On Android, the runtime can pick GPU as the backend for inference instead of
CPU, by passing `litert::lm::Backend::GPU` in `EngineSettings::CreateDefault()`.
```cpp
// ...
// Set GPU as backend instead of CPU.
auto engine_settings = EngineSettings::CreateDefault(
model_assets, litert::lm::Backend::GPU);
// ...
```
When the engine is created, it looks for `libLiteRtGpuAccelerator.so` and
`libLiteRtTopKSampler.so` from the directories specified in `LD_LIBRARY_PATH`,
rpath in the app binary or default location by system dynamic linker. For
example, if an app binary and .so files are packaged in an APK by Android SDK,
.so files are unpacked by Android Package Manager where the app binary can find
them, i.e. under app's `/lib` directory.
### Advanced Control Over Prefill/Decode
This API provides fine-grained control over the two phases of transformer
inference: prefill and decode. This can be useful for advanced scenarios or
performance optimization.
- **Prefill**: The `RunPrefill` or `RunPrefillAsync` methods process the input
prompt and populate the model's internal state (KV cache).
- **Decode**: The `RunDecode` or `RunDecodeAsync` methods generate new tokens
one at a time based on the prefilled state.
Example code snippet:
```cpp
#include "third_party/odml/litert_lm/runtime/engine/engine.h"
// ...
// 1. Define model assets and engine settings.
auto model_assets = ModelAssets::Create(model_path);
CHECK_OK(model_assets);
auto engine_settings = EngineSettings::CreateDefault(
model_assets, litert::lm::Backend::CPU);
// 2. Create the main Engine object.
absl::StatusOr<std::unique_ptr<Engine>> engine = Engine::CreateEngine(engine_settings);
CHECK_OK(engine);
// 3. Create a Session for a new conversation.
auto session_config = SessionConfig::CreateDefault();
absl::StatusOr<std::unique_ptr<Engine::Session>> session = (*engine)->CreateSession(session_config);
CHECK_OK(session);
// 4. Prefill some prompts.
CHECK_OK((*session)->RunPrefill({InputText("What's the tallest building in the world?")}));
CHECK_OK((*session)->RunPrefill({InputText(" and what's the tallest building in the United States?")}));
// 5. Start decoding.
auto responses = (*session)->RunDecode();
// 6. Print the response.
std::cout << *responses << std::endl;
```
## FAQ
### LiteRT vs LiteRT-LM vs MediaPipe GenAI Tasks
LiteRT, LiteRT-LM, and MediaPipe GenAI Tasks are three libraries within the
Google AI Edge stack that build on each other. By exposing functionality at
different abstraction layers, we hope to enable developers to balance their
respective needs between flexibility and complexity.
[LiteRT](https://ai.google.dev/edge/litert) is Google AI Edge's underlying
on-device runtime. Developers can convert individual PyTorch, TensorFlow, and JAX
models to LiteRT and run them on-device.
**LiteRT-LM** gives developers the pipeline framework to stitch together
multiple LiteRT models with pre and post processing components (e.g. tokenizer,
vision encoder, text decoder).
[MediaPipe GenAI Tasks](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference)
are out-of-the-box native APIs (Kotlin, Swift, JS) to run language models by
just setting a few parameters such as temperature and topK.
### .litertlm vs .task
MediaPipe GenAI Tasks currently use `.task` files to represent language models.
Task files are a zip of multiple LiteRT files, components, and metadata.
`.litertlm` is an evolution of the `.task` file format to include additional
metadata and enable better compression.
During our LiteRT-LM preview, we will release a small number of `.litertlm`
files. MediaPipe APIs will continue to use `.task` files. Once we have the
first full release of LiteRT-LM, we will migrate MediaPipe APIs to use the new
`.litertlm` files and release a wider collection of `.litertlm` files on the
[LiteRT Hugging Face Community](https://huggingface.co/litert-community).
## Reporting Issues
If you encounter a bug or have a feature request, we encourage you to use the
[GitHub Issues](https://github.com/google-ai-edge/LiteRT-LM/issues/new) page to
report it.
Before creating a new issue, please search the existing issues to avoid
duplicates. When filing a new issue, please provide a clear title and a detailed
description of the problem, including steps to reproduce it. The more
information you provide, the easier it will be for us to help you.
|
https://github.com/Foreseerr/TScale
|
TScale
Languages: C++ (69.1%), Cuda (28.2%), C (2.1%)
cfg
cfg
code
code
doc
doc
fo
fo
img
img
...
.gitignore
.gitignore
Dockerfile
Dockerfile
LICENSE
LICENSE
README.md
README.md
test.cfg
test.cfg
> README.md
# TScale
This repo contains transformer train and inference code written in C++ and CUDA.
TScale is designed to run on consumer hardware. To achieve the best results it features
- Optimized transformer architecture with faster convergence and ~2x reduced attention costs
- Support for fp8 and int8 model weights and activations precision
- Optimized for consumer nVidia GPUs including fast reduced precision training without sacrificing model quality
- CPU offload reduces GPU memory requirements for training
- Sync distributed training on several same config hosts
- 1-bit gradient compression allowing using regular ethernet links for interconnect
- Async distributed training on arbitrary hosts with negligible network traffic. In this mode training can be run on geographically separated hosts
# Distributed training of 1.5B model on consumer GPU
By using inexpensive GPUs and the async distributed mode, TScale trains LLMs quickly and affordably. Log loss for the 1.5B model trained on fineweb-edu for 2 days and $500 on several spot instances with a 4090:

# Training your own 1T model at home
A 1T model size sounds beyond reach for most people and even organisations. However, if we count model size creatively, it is not out of reach. In this case we build a model with a 1T index which we look up for every token to make predictions with a much smaller model. In terms of logloss/perplexity this construction easily achieves stellar results. The index for fineweb-edu occupies about 1T of disk space. A training run of a 125M model with this ~1T index achieves an **x8** perplexity reduction:
|Model|Perplexity|
|-----|-|
|125M |19.02|
|125M + 1T index|2.28|
# Read more
[Training 125M model](doc/125M_model.md)
[Training 1.5B model](doc/1.5B_model.md)
[Training 1T (!) model in your kitchen](doc/1T_model.md)
[Async distributed train](doc/fed.md)
[Notes on model and compute precision](doc/precision.md)
[TScale transformer model](doc/model.md)
[Data indexing](doc/lm_search.md)
[Tokenizer](doc/tokenizer.md)
# Build
To build the code, CUDA v12.3 and a C++ compiler are required: MSVC for Windows, cmake+clang for Linux. To support cross-platform build file generation this repo uses [fo](doc/fo.md), a lightweight solution/build file generator. To generate build files you need to compile fo/fo.cpp and run it with two arguments: the first is the root of the source tree, the second is the directory to store the build files in.
## Windows
```bash
D:\TScale>fo.exe code sln
```
Then open code.sln from d:\TScale\sln\code.sln.
## Linux
To compile TScale for Linux you need to compile fo.cpp, generate a CMakeLists.txt file, run cmake, and run make.
```bash
~/TScale/fo$ clang++-18 fo.cpp -o fo
~/TScale/fo$ cd ..
~/TScale$ ./fo/fo code make.dir
~/TScale$ cd make.dir
~/TScale/make.dir$ cmake -D CMAKE_BUILD_TYPE=RelWithDebInfo .
~/TScale/make.dir$ make
```
# Get train data
Examples in the code use the [enwik9](https://mattmahoney.net/dc/textdata.html) dataset and its truncated version enwik8. The Hugging Face hosted datasets openwebtext, ontocord/CulturaY, and danasone/librusec are also used in examples. To import them use [hf_import](/pysrc/hf_import/import.py).
# Train model
[gpt_train](/code/gpt/train) is used to train a model. It is controlled by the [train script](/doc/train_script.md) and [data script](/doc/data_script.md). Default scripts are stored in [main_gpt.cpp](/code/gpt/train/main_gpt.cpp). To load train script from file run gpt_train with '-d data_script.txt -s train_script.txt' arguments.
## quick run
Compile gpt-train. Run it in the root directory:
```bash
~/TScale$ ./make.dir/gpt-train
```
## sync distributed run
Currently training can be distributed only among a power-of-2 number of worker hosts.
To start a worker process run gpt_train with '-w 10000' argument. 10000 specifies port number to use.
To run master process call net_train('worker.txt') function in train script. List worker IP addresses in the file provided to net_train().
## multiple GPU
To use multiple GPU devices, set the DEVICE_COUNT variable in the train script to the number of GPUs to use. For distributed runs DEVICE_COUNT is applied on each worker; heterogeneous configurations are not supported.
## scripts
Description of scripts used in training: [data script](doc/data_script.md), [train script](doc/train_script.md)
# Inference test
To try inference with the trained model you can use [gpt_infer](/code/gpt/infer). It runs a basic HTTP server on port 11311 and allows sampling continuations from the model. The current implementation is slow and designed for demonstration purposes only.
# License
MIT
|
https://github.com/ashvardanian/fork_union
|
fork_union
Low(est?)-latency OpenMP-style minimalistic scoped thread-pool designed for 'Fork-Join' parallelism in Rust and C++, avoiding memory allocations, mutexes, CAS-primitives, and false-sharing on the hot path 🍴
Languages: C++ (57.2%), Rust (32.2%), C (8.2%), CMake (2.3%), Python (0.1%)
.github/workflows
.github/workflows
.vscode
.vscode
c
c
cmake
cmake
include
include
...
.clang-format
.clang-format
.clang-tidy
.clang-tidy
.cmake-format.py
.cmake-format.py
.gitignore
.gitignore
CMakeLists.txt
CMakeLists.txt
> README.md
# Fork Union 🍴
"Fork Union" is the low(est?)-latency [OpenMP](https://en.wikipedia.org/wiki/OpenMP)-style [NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access)-aware minimalistic scoped thread-pool designed for 'Fork-Join' parallelism in C++, C, and Rust, avoiding × [mutexes & system calls](#locks-and-mutexes), × [dynamic memory allocations](#memory-allocations), × [CAS-primitives](#atomics-and-cas), and × [false-sharing](#) of CPU cache-lines on the hot path 🍴

Most "thread-pools" are not, in fact, thread-pools, but rather "task-queues" that are designed to synchronize a concurrent dynamically growing list of heap-allocated globally accessible shared objects.
In C++ terms, think of it as a `std::queue<std::function<void()>>` protected by a `std::mutex`, where each thread waits for the next task to be available and then executes it on some random core chosen by the OS scheduler.
All of that is slow... and true across C++, C, and Rust projects.
Short of OpenMP, practically every other solution has high dispatch latency and noticeable memory overhead.
OpenMP, however, is not ideal for fine-grained parallelism and is less portable than the C++ and Rust standard libraries.
This is where __`fork_union`__ comes in.
It's a C++17 library with C99 and Rust bindings ([previously the Rust implementation was standalone](#reimplementing-in-rust)).
It supports pinning threads to specific NUMA nodes or individual CPU cores, making it much easier to ensure data locality and halving the latency of individual loads in Big Data applications.
## Basic Usage
__`Fork Union`__ is dead-simple to use!
There is no nested parallelism, exception handling, or "future promises"; they are banned.
The thread pool itself has a few core operations:
- `try_spawn` to initialize worker threads, and
- `for_threads` to launch a blocking callback on all threads.
Higher-level APIs for index-addressable tasks are also available:
- `for_n` - for individual evenly-sized tasks,
- `for_n_dynamic` - for individual unevenly-sized tasks,
- `for_slices` - for slices of evenly-sized tasks.
For additional flow control and tuning, the following helpers are available:
- `sleep(microseconds)` - for longer naps,
- `terminate` - to kill the threads before the destructor is called,
- `unsafe_for_threads` - to broadcast a callback without blocking,
- `unsafe_join` - to block until the completion of the current broadcast.
On Linux, in C++, given the maturity and flexibility of the HPC ecosystem, it provides [NUMA extensions](#non-uniform-memory-access-numa).
That includes the `linux_colocated_pool` analog of the `basic_pool` and the `linux_numa_allocator` for allocating memory on a specific NUMA node.
Those are out-of-the-box compatible with the higher-level APIs.
Most interestingly, for Big Data applications, a higher-level `distributed_pool` class will address and balance the work across all NUMA nodes.
### Intro in Rust
A minimal example may look like this:
```rs
use fork_union as fu;
let mut pool = fu::spawn(2);
pool.for_threads(|thread_index, colocation_index| {
println!("Hello from thread # {} on colocation # {}", thread_index + 1, colocation_index + 1);
});
```
Higher-level APIs distribute index-addressable tasks across the threads in the pool:
```rs
pool.for_n(100, |prong| {
println!("Running task {} on thread # {}",
prong.task_index + 1, prong.thread_index + 1);
});
pool.for_slices(100, |prong, count| {
println!("Running slice [{}, {}) on thread # {}",
prong.task_index, prong.task_index + count, prong.thread_index + 1);
});
pool.for_n_dynamic(100, |prong| {
println!("Running task {} on thread # {}",
prong.task_index + 1, prong.thread_index + 1);
});
```
A safer `try_spawn_in` interface is recommended using the Allocator API.
A more realistic example may look like this:
```rs
use std::error::Error;
use fork_union as fu;
fn heavy_math(_: usize) {}
fn main() -> Result<(), Box<dyn Error>> {
let mut pool = fu::ThreadPool::try_spawn(4)?;
let mut pool = fu::ThreadPool::try_named_spawn("heavy-math", 4)?;
pool.for_n_dynamic(400, |prong| {
heavy_math(prong.task_index);
});
Ok(())
}
```
### Intro in C++
To integrate it into your C++ project, either copy the `include/fork_union.hpp` file into your project, add a Git submodule, or use CMake.
For a Git submodule, run:
```bash
git submodule add https://github.com/ashvardanian/fork_union.git extern/fork_union
```
Alternatively, using CMake:
```cmake
FetchContent_Declare(
fork_union
GIT_REPOSITORY
https://github.com/ashvardanian/fork_union
)
FetchContent_MakeAvailable(fork_union)
target_link_libraries(your_target PRIVATE fork_union::fork_union)
```
Then, include the header in your C++ code:
```cpp
#include <fork_union.hpp> // `basic_pool_t`
#include <cstdio> // `stderr`
#include <cstdlib> // `EXIT_SUCCESS`
namespace fu = ashvardanian::fork_union;
int main() {
fu::basic_pool_t pool;
if (!pool.try_spawn(std::thread::hardware_concurrency())) {
std::fprintf(stderr, "Failed to fork the threads\n");
return EXIT_FAILURE;
}
// Dispatch a callback to each thread in the pool
pool.for_threads([&](std::size_t thread_index) noexcept {
std::printf("Hello from thread # %zu (of %zu)\n", thread_index + 1, pool.count_threads());
});
// Execute 1000 tasks in parallel, expecting them to have comparable runtimes
// and mostly co-locating subsequent tasks on the same thread. Analogous to:
//
// #pragma omp parallel for schedule(static)
// for (int i = 0; i < 1000; ++i) { ... }
//
// You can also think about it as a shortcut for the `for_slices` + `for`.
pool.for_n(1000, [](std::size_t task_index) noexcept {
std::printf("Running task %zu of 3\n", task_index + 1);
});
pool.for_slices(1000, [](std::size_t first_index, std::size_t count) noexcept {
std::printf("Running slice [%zu, %zu)\n", first_index, first_index + count);
});
// Like `for_n`, but each thread greedily steals tasks, without waiting for
// the others or expecting individual tasks to have same runtimes. Analogous to:
//
// #pragma omp parallel for schedule(dynamic, 1)
// for (int i = 0; i < 3; ++i) { ... }
pool.for_n_dynamic(3, [](std::size_t task_index) noexcept {
std::printf("Running dynamic task %zu of 1000\n", task_index + 1);
});
return EXIT_SUCCESS;
}
```
That's it.
For advanced usage, refer to the [NUMA section below](#non-uniform-memory-access-numa).
## Alternatives & Differences
Many other thread-pool implementations are more feature-rich but have different limitations and design goals.
- Modern C++: [`taskflow/taskflow`](https://github.com/taskflow/taskflow), [`progschj/ThreadPool`](https://github.com/progschj/ThreadPool), [`bshoshany/thread-pool`](https://github.com/bshoshany/thread-pool)
- Traditional C++: [`vit-vit/CTPL`](https://github.com/vit-vit/CTPL), [`mtrebi/thread-pool`](https://github.com/mtrebi/thread-pool)
- Rust: [`tokio-rs/tokio`](https://github.com/tokio-rs/tokio), [`rayon-rs/rayon`](https://github.com/rayon-rs/rayon), [`smol-rs/smol`](https://github.com/smol-rs/smol)
Those are not designed for the same OpenMP-like use cases as __`fork_union`__.
Instead, they primarily focus on task queuing, which requires significantly more work.
### Locks and Mutexes
Unlike the `std::atomic`, the `std::mutex` is a system call, and it can be expensive to acquire and release.
Its implementations generally have 2 executable paths:
- the fast path, where the mutex is not contended, where it first tries to grab the mutex via a compare-and-swap operation, and if it succeeds, it returns immediately.
- the slow path, where the mutex is contended, and it has to go through the kernel to block the thread until the mutex is available.
On Linux, the latter translates to ["futex"](https://en.wikipedia.org/wiki/Futex) ["system calls"](https://en.wikipedia.org/wiki/System_call), which is expensive.
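For intuition, here is a minimal sketch of such a user-space lock, in the spirit of the `spin_mutex_t` used later in the NUMA example but not the library's actual implementation: the uncontended path is a single atomic read-modify-write and never enters the kernel.
```cpp
#include <atomic>
#include <thread>

// Illustrative only: a tiny user-space spin lock. The fast (uncontended) path
// is one atomic exchange; no futex, no kernel transition.
class tiny_spin_mutex {
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
  public:
    void lock() noexcept {
        while (flag_.test_and_set(std::memory_order_acquire))
            std::this_thread::yield(); // contended path; a real pool would use a cheaper CPU "pause" hint instead
    }
    void unlock() noexcept { flag_.clear(std::memory_order_release); }
};
```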
### Memory Allocations
C++ has rich functionality for concurrent applications, like `std::future`, `std::packaged_task`, `std::function`, `std::queue`, `std::condition_variable`, and so on.
Most of those, I believe, aren't usable in Big-Data applications, where you always operate in memory-constrained environments:
- The idea of raising a `std::bad_alloc` exception when there is no memory left and just hoping that someone up the call stack will catch it is not a great design idea for any Systems Engineering.
- The threat of having to synchronize ~200 physical CPU cores across 2-8 sockets and potentially dozens of [NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access) nodes around a shared global memory allocator practically means you can't have predictable performance.
As we focus on a simpler ~~concurrency~~ parallelism model, we can avoid the complexity of allocating shared states, wrapping callbacks into some heap-allocated "tasks", and other boilerplate code.
Less work - more performance.
### Atomics and [CAS](https://en.wikipedia.org/wiki/Compare-and-swap)
Once you get to the lowest-level primitives on concurrency, you end up with the `std::atomic` and a small set of hardware-supported atomic instructions.
Hardware implements it differently:
- x86 is built around the "Total Store Order" (TSO) [memory consistency model](https://en.wikipedia.org/wiki/Memory_ordering) and provides `LOCK` variants of the `ADD` and `CMPXCHG`, which act as full-blown "fences" - no loads or stores can be reordered across it.
- Arm, on the other hand, has a "weak" memory model and provides a set of atomic instructions that are not fences, that match the C++ concurrency model, offering `acquire`, `release`, and `acq_rel` variants of each atomic instruction—such as `LDADD`, `STADD`, and `CAS` - which allow precise control over visibility and order, especially with the introduction of "Large System Extension" (LSE) instructions in Armv8.1.
In practice, a locked atomic on x86 requires the cache line in the Exclusive state in the requester's L1 cache.
This would incur a coherence transaction (Read-for-Ownership) if some other core had the line.
Both Intel and AMD handle this similarly.
It makes [Arm and Power much more suitable for lock-free programming](https://arangodb.com/2021/02/cpp-memory-model-migrating-from-x86-to-arm/) and concurrent data structures, but some observations hold for both platforms.
Most importantly, "Compare and Swap" (CAS) is a costly operation and should be avoided whenever possible.
On x86, for example, the `LOCK ADD` [can easily take 50 CPU cycles](https://travisdowns.github.io/blog/2020/07/06/concurrency-costs), being 50x slower than a regular `ADD` instruction, but still easily 5-10x faster than a `LOCK CMPXCHG` instruction.
Once contention rises, the gap naturally widens and is further amplified by the increased "failure" rate of the CAS operation, particularly when the value being compared has already changed.
That's why, for the "dynamic" mode, we resort to using an additional atomic variable as opposed to more typical CAS-based implementations.
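To make that concrete, here is a small sketch (the function names are illustrative, not fork_union APIs) of the two ways a worker could claim the next task index: the first is what relying on an extra atomic counter boils down to, the second is the CAS retry loop it avoids.
```cpp
#include <atomic>
#include <cstddef>

std::atomic<std::size_t> next_task{0};

// One wait-free increment per claimed task; no retry loop.
std::size_t claim_with_fetch_add() noexcept {
    return next_task.fetch_add(1, std::memory_order_relaxed);
}

// The CAS alternative: under contention the comparison keeps failing,
// the loop retries, and coherence traffic grows.
std::size_t claim_with_cas() noexcept {
    std::size_t expected = next_task.load(std::memory_order_relaxed);
    while (!next_task.compare_exchange_weak(expected, expected + 1, std::memory_order_relaxed))
        ; // `expected` is refreshed on failure; try again
    return expected;
}
```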
### Alignment & False Sharing
The thread-pool needs several atomic variables to synchronize the state.
If those variables are located on the same cache line, they will be "falsely shared" between threads.
This means that when one thread updates one of the variables, it will invalidate the cache line in all other threads, causing them to reload it from memory.
This is a common problem, and the C++ standard recommends addressing it with `alignas(std::hardware_destructive_interference_size)` for your hot variables.
There are, however, caveats.
The `std::hardware_destructive_interference_size` is [generally 64 bytes on x86](https://stackoverflow.com/a/39887282), matching the size of a single cache line.
But in reality, on most x86 machines, [depending on the BIOS "spatial prefetcher" settings](https://www.techarp.com/bios-guide/cpu-adjacent-sector-prefetch/), will [fetch 2 cache lines at a time starting with Sandy Bridge](https://stackoverflow.com/a/72127222).
Because of these rules, padding hot variables to 128 bytes is a conservative but often sensible defensive measure adopted by Folly's `cacheline_align` and Java's `jdk.internal.vm.annotation.Contended`. 
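A sketch of what that padding looks like in practice (the struct and member names here are illustrative, not the library's types):
```cpp
#include <atomic>
#include <cstddef>

// Conservative 128-byte alignment, per the reasoning above: each hot atomic
// owns its own pair of cache lines, so one thread's updates never invalidate
// the line holding another thread's counter.
struct alignas(128) padded_counter {
    std::atomic<std::size_t> value{0};
};

struct pool_state {
    padded_counter fork_generation; // bumped on every new fork
    padded_counter threads_to_sync; // decremented by workers as they finish
};

static_assert(sizeof(padded_counter) == 128, "padding keeps hot counters apart");
```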
## Pro Tips
### Non-Uniform Memory Access (NUMA)
Handling NUMA isn't trivial and is only supported on Linux with the help of the [`libnuma` library](https://github.com/numactl/numactl).
It provides the `mbind` interface to pin specific memory regions to particular NUMA nodes, as well as helper functions to query the system topology, which are exposed via the `fork_union::numa_topology` template.
Let's say you are working on a Big Data application, like brute-forcing Vector Search using the [SimSIMD](https://github.com/ashvardanian/simsimd) library on a dual-socket CPU system, similar to [USearch](https://github.com/unum-cloud/usearch/pulls).
The first part of that program may be responsible for sharding the incoming stream of data between distinct memory regions.
That part, in our simple example, will be single-threaded:
```cpp
#include <vector> // `std::vector`
#include <span> // `std::span`
#include <fork_union.hpp> // `linux_numa_allocator`, `numa_topology_t`, `linux_distributed_pool_t`
#include <simsimd/simsimd.h> // `simsimd_f32_cos`, `simsimd_distance_t`
namespace fu = ashvardanian::fork_union;
using floats_alloc_t = fu::linux_numa_allocator<float>;
constexpr std::size_t dimensions = 768; /// Matches most BERT-like models
static std::vector<float, floats_alloc_t> first_half(floats_alloc_t(0));
static std::vector<float, floats_alloc_t> second_half(floats_alloc_t(1));
static fu::numa_topology_t numa_topology;
static fu::linux_distributed_pool_t distributed_pool;
/// Dynamically shards incoming vectors across 2 nodes in a round-robin fashion.
void append(std::span<float, dimensions> vector) {
bool put_in_second = first_half.size() > second_half.size();
if (put_in_second) second_half.insert(second_half.end(), vector.begin(), vector.end());
else first_half.insert(first_half.end(), vector.begin(), vector.end());
}
```
The concurrent part would involve spawning threads adjacent to every memory pool to find the best `search_result_t`.
The primary `search` function, in ideal world would look like this:
1. Each thread finds the best match within its "slice" of a NUMA node, tracking the best distance and index in a local CPU register.
2. All threads in each NUMA node atomically synchronize using a NUMA-local instance of `search_result_t`.
3. The main thread collects aggregates of partial results from all NUMA nodes.
That is, however, overly complicated to implement.
Such tree-like hierarchical reductions are optimal in a theoretical sense. Still, weighing the small relative cost of spin-locking once at the end of a thread's scope against the complexity of organizing the code, the more straightforward path is better.
A minimal example would look like this:
```cpp
/// On each NUMA node we'll synchronize the threads
struct search_result_t {
simsimd_distance_t best_distance {std::numeric_limits<simsimd_distance_t>::max()};
std::size_t best_index {0};
};
inline search_result_t pick_best(search_result_t const& a, search_result_t const& b) noexcept {
return a.best_distance < b.best_distance ? a : b;
}
/// Uses all CPU threads to search for the closest vector to the @p query.
search_result_t search(std::span<float, dimensions> query) {
bool const need_to_spawn_threads = !distributed_pool.count_threads();
if (need_to_spawn_threads) {
assert(numa_topology.try_harvest() && "Failed to harvest NUMA topology");
assert(numa_topology.count_nodes() == 2 && "Expected exactly 2 NUMA nodes");
assert(distributed_pool.try_spawn(numa_topology, sizeof(search_result_t)) && "Failed to spawn NUMA pools");
}
search_result_t result;
fu::spin_mutex_t result_update; // ? Lighter `std::mutex` alternative w/out system calls
auto concurrent_searcher = [&](auto first_prong, std::size_t count) noexcept {
auto [first_index, _, colocation] = first_prong;
auto& vectors = colocation == 0 ? first_half : second_half;
search_result_t thread_local_result;
for (std::size_t task_index = first_index; task_index < first_index + count; ++task_index) {
simsimd_distance_t distance;
simsimd_f32_cos(query.data(), vectors.data() + task_index * dimensions, dimensions, &distance);
thread_local_result = pick_best(thread_local_result, {distance, task_index});
}
// ! We are spinning on a remote cache line... for simplicity.
std::lock_guard<fu::spin_mutex_t> lock(result_update);
result = pick_best(result, thread_local_result);
};
auto _ = distributed_pool[0].for_slices(first_half.size() / dimensions, concurrent_searcher);
auto _ = distributed_pool[1].for_slices(second_half.size() / dimensions, concurrent_searcher);
return result;
}
```
In a dream world, we would call `distributed_pool.for_n`, but there is no clean way to make the scheduling processes aware of the data distribution in an arbitrary application, so that's left to the user.
Calling `linux_colocated_pool::for_slices` on individual NUMA-node-specific colocated pools is the cheapest general-purpose recipe for Big Data applications.
For more flexibility around building higher-level low-latency systems, there are unsafe APIs expecting you to manually "join" the broadcasted calls, like `unsafe_for_threads` and `unsafe_join`.
Instead of hard-coding the `distributed_pool[0]` and `distributed_pool[1]`, we can iterate through them without keeping the lifetime-preserving handle to the passed `concurrent_searcher`:
```cpp
for (std::size_t colocation = 0; colocation < distributed_pool.colocations_count(); ++colocation)
distributed_pool[colocation].unsafe_for_threads(..., concurrent_searcher);
for (std::size_t colocation = 0; colocation < distributed_pool.colocations_count(); ++colocation)
distributed_pool[colocation].unsafe_join();
```
### Efficient Busy Waiting
Here's what "busy waiting" looks like in C++:
```cpp
while (!has_work_to_do())
std::this_thread::yield();
```
On Linux, the `std::this_thread::yield()` translates into a `sched_yield` system call, which means context switching to the kernel and back.
Instead, you can replace the `standard_yield_t` STL wrapper with a platform-specific "yield" instruction, which is much cheaper.
Those instructions, like [`WFET` on Arm](https://developer.arm.com/documentation/ddi0602/2025-03/Base-Instructions/WFET--Wait-for-event-with-timeout-), generally hint the CPU to transition to a low-power state.
| Wrapper | ISA | Instruction | Privileges |
| ------------------ | ------------ | ----------- | ---------- |
| `x86_yield_t` | x86 | `PAUSE` | R3 |
| `x86_tpause_1us_t` | x86+WAITPKG | `TPAUSE` | R3 |
| `arm64_yield_t` | AArch64 | `YIELD` | EL0 |
| `arm64_wfet_t` | AArch64+WFXT | `WFET` | EL0 |
| `riscv_yield_t` | RISC-V | `PAUSE` | U |
No kernel calls.
No futexes.
Works in tight loops.
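For illustration, this is roughly what such a wrapper reduces to on x86 and AArch64 (a sketch with made-up names, not the library's `x86_yield_t` / `arm64_yield_t` code):
```cpp
#include <atomic>
#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h> // _mm_pause
#endif

// A cheap, user-space "yield": a CPU hint instead of a sched_yield syscall.
inline void cpu_relax() noexcept {
#if defined(__x86_64__) || defined(_M_X64)
    _mm_pause(); // the PAUSE instruction from the table above
#elif defined(__aarch64__)
    asm volatile("yield"); // the AArch64 YIELD hint
#endif
}

inline void busy_wait(std::atomic<bool> const &has_work) noexcept {
    while (!has_work.load(std::memory_order_acquire))
        cpu_relax(); // no kernel calls, no futexes, works in tight loops
}
```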
## Performance
One of the most common parallel workloads is the N-body simulation ¹.
Implementations are available in both C++ and Rust in `scripts/nbody.cpp` and `scripts/nbody.rs`, respectively.
Both are lightweight and involve little logic outside of number-crunching, so both can be easily profiled with `time` and introspected with `perf` Linux tools.
Additional NUMA-aware Search examples are available in `scripts/search.rs`.
---
C++ benchmarking results for $N=128$ bodies and $I=1e6$ iterations:
| Machine | OpenMP (D) | OpenMP (S) | Fork Union (D) | Fork Union (S) |
| :------------- | ---------: | ---------: | -------------: | -------------: |
| 16x Intel SPR | 20.3s | 16.0s | 18.1s | 10.3s |
| 12x Apple M2 | ? | 1m:16.7s | 1m:30.3s ² | 1m:40.7s ² |
| 96x Graviton 4 | 32.2s | 20.8s | 39.8s | 26.0s |
Rust benchmarking results for $N=128$ bodies and $I=1e6$ iterations:
| Machine | Rayon (D) | Rayon (S) | Fork Union (D) | Fork Union (S) |
| :------------- | --------: | --------: | -------------: | -------------: |
| 16x Intel SPR | 51.4s | 38.1s | 15.9s | 9.8s |
| 12x Apple M2 | 3m:23.5s | 2m:0.6s | 4m:8.4s | 1m:20.8s |
| 96x Graviton 4 | 2m:13.9s | 1m:35.6s | 18.9s | 10.1s |
> ¹ Another common workload is "Parallel Reductions" covered in a separate [repository](https://github.com/ashvardanian/ParallelReductionsBenchmark).
> ² When a combination of performance and efficiency cores is used, dynamic stealing may be more efficient than static slicing.
You can rerun those benchmarks with the following commands:
```bash
cmake -B build_release -D CMAKE_BUILD_TYPE=Release
cmake --build build_release --config Release
time NBODY_COUNT=128 NBODY_ITERATIONS=1000000 NBODY_BACKEND=fork_union_static build_release/fork_union_nbody
time NBODY_COUNT=128 NBODY_ITERATIONS=1000000 NBODY_BACKEND=fork_union_dynamic build_release/fork_union_nbody
```
## Safety & Logic
There are only 3 core atomic variables in this thread-pool, and 1 for dynamically-stealing tasks.
Let's call every invocation of a `for_*` API - a "fork", and every exit from it a "join".
| Variable | Users Perspective | Internal Usage |
| :----------------- | :--------------------------- | :------------------------------------ |
| `stop` | Stop the entire thread-pool | Tells workers when to exit the loop |
| `fork_generation` | "Forks" called since init | Tells workers to wake up on new forks |
| `threads_to_sync` | Threads not joined this fork | Tells main thread when workers finish |
| `dynamic_progress` | Progress within this fork | Tells workers which jobs to take |
__Why don't we need atomics for "total_threads"?__
The only way to change the number of threads is to `terminate` the entire thread-pool and then `try_spawn` it again.
Either of those operations can only be called from one thread at a time and never coincide with any running tasks.
That's ensured by the `stop`.
__Why don't we need atomics for a "job pointer"?__
A new task can only be submitted from one thread that updates the number of parts for each new fork.
During that update, the workers are asleep, spinning on old values of `fork_generation` and `stop`.
They only wake up and access the new value once `fork_generation` increments, ensuring safety.
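Putting the table and the two answers above together, the fork/join handshake is roughly the following (an illustrative sketch of the described logic, not the library's actual code):
```cpp
#include <atomic>
#include <cstddef>

std::atomic<bool> stop{false};
std::atomic<std::size_t> fork_generation{0};
std::atomic<std::size_t> threads_to_sync{0};

// Worker side: spin only on `stop` and `fork_generation`; the job itself is
// published with plain writes before the generation counter is bumped.
void worker_loop() {
    std::size_t last_seen_generation = 0;
    while (!stop.load(std::memory_order_acquire)) {
        if (fork_generation.load(std::memory_order_acquire) == last_seen_generation)
            continue; // no new fork yet; keep spinning (with a CPU "pause" in practice)
        ++last_seen_generation;
        // ... run this thread's share of the pre-published job ...
        threads_to_sync.fetch_sub(1, std::memory_order_release); // report completion
    }
}

// Caller side: publish the job with plain writes, then wake everyone at once.
void fork_and_join(std::size_t total_threads) {
    threads_to_sync.store(total_threads, std::memory_order_relaxed);
    fork_generation.fetch_add(1, std::memory_order_release); // workers wake up here
    while (threads_to_sync.load(std::memory_order_acquire) != 0) {
        // join: spin until every worker has checked in
    }
}
```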
__How do we deal with overflows and `SIZE_MAX`-sized tasks?__
The library entirely avoids saturating multiplication and only uses one saturating addition in "release" builds.
To test the consistency of arithmetic, the C++ template class can be instantiated with a custom `index_t`, such as `std::uint8_t` or `std::uint16_t`.
In the former case, no more than 255 threads can operate, and no more than 255 tasks can be addressed, allowing us to easily test every weird corner case of [0:255] threads competing for [0:255] tasks.
__Why not reimplement it in Rust?__
The original Rust implementation was a standalone library, but in essence, Rust doesn't feel designed for parallelism, concurrency, and expert Systems Engineering.
It enforces stringent safety rules, which is excellent for building trustworthy software, but realistically, it makes lock-free concurrent programming with minimal memory allocations too complicated.
Now, the Rust library is a wrapper over the C binding of the C++ core implementation.
## Testing and Benchmarking
To run the C++ tests, use CMake:
```bash
cmake -B build_release -D CMAKE_BUILD_TYPE=Release
cmake --build build_release --config Release -j
ctest --test-dir build_release # run all tests
build_release/fork_union_nbody # run the benchmarks
```
For C++ debug builds, consider using the VS Code debugger presets or the following commands:
```bash
cmake -B build_debug -D CMAKE_BUILD_TYPE=Debug
cmake --build build_debug --config Debug # build with Debug symbols
build_debug/fork_union_test_cpp20 # run a single test executable
```
To run static analysis:
```bash
sudo apt install cppcheck clang-tidy
cmake --build build_debug --target cppcheck # detects bugs & undefined behavior
cmake --build build_debug --target clang-tidy # suggest code improvements
```
To include NUMA, Huge Pages, and other optimizations on Linux, make sure to install dependencies:
```bash
sudo apt-get -y install libnuma-dev libnuma1 # NUMA
sudo apt-get -y install libhugetlbfs-dev libhugetlbfs-bin # Huge Pages
sudo ln -s /usr/bin/ld.hugetlbfs /usr/share/libhugetlbfs/ld # Huge Pages linker
```
To build with an alternative compiler, like LLVM Clang, use the following command:
```bash
sudo apt-get install libomp-15-dev clang++-15 # OpenMP version must match Clang
cmake -B build_debug -D CMAKE_BUILD_TYPE=Debug -D CMAKE_CXX_COMPILER=clang++-15
cmake --build build_debug --config Debug
build_debug/fork_union_test_cpp20
```
For Rust, use the following command:
```bash
rustup toolchain install # for Alloc API
cargo miri test # to catch UBs
cargo test --release # to run the tests fast
```
|
https://github.com/NimbleEdge/sparse_transformers
|
sparse_transformers
Sparse Inferencing for transformer based LLMs
Languages: Python (83.0%), C++ (8.7%), Cuda (4.9%), Shell (3.1%), CMake (0.3%)
.github
.github
benchmarks
benchmarks
configs
configs
sparse_transformers
sparse_transformers
src
src
...
.gitignore
.gitignore
CODE_OF_CONDUCT.md
CODE_OF_CONDUCT.md
CONTRIBUTING.md
CONTRIBUTING.md
LICENSE
LICENSE
README.md
README.md
> README.md
[](https://discord.gg/y8WkMncstk)
# Fused Sparse C++ Kernels for Transformers
## Overview
The project implements sparse matrix multiplication and fuses the up/down projections in the MLP layers, with sparsity driven by low-rank weight activations.
Work is based on [Deja Vu](https://arxiv.org/abs/2310.17157) and Apple's [LLM in a Flash](https://arxiv.org/abs/2312.11514).
### Benefits
- **1.6-1.8x overall gain in TTFT and TPS** (4-5x gain in MLP Inference)
- **26.4%** reduction in memory usage
- **6.7×** faster index selection and replacement for weight caching
```
┌─────────────────────────────────────────────────────────────────┐
│ Sparse LLM Inference Pipeline │
├─────────────────────────────────────────────────────────────────┤
│ Sparsity Selection │
│ ├─ Hidden States → LoRA Projection (Importance Scoring) │
│ ├─ Binary Mask Generation: (scores > threshold) │
│ └─ Mask Normalization: Union across batch dimension │
├─────────────────────────────────────────────────────────────────┤
│ Differential Weight Caching │
│ ├─ Mask Change Detection: XOR with previous mask │
│ ├─ Paired Replacement: Direct substitution algorithm │
│ └─ Zero-Copy Tensor Views: torch::from_blob references │
├─────────────────────────────────────────────────────────────────┤
│ Sparse Computation │
│ ├─ Concatenated Gate+Up Projection (Fused Operation) │
│ ├─ Element-wise Activation: σ(gate) ⊙ up │
│ └─ Sparse Down Projection: Only active intermediate dims │
└─────────────────────────────────────────────────────────────────┘
```
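A rough PyTorch sketch of the sparsity-selection stage above (the function and tensor names are illustrative only and not part of this library's API):
```python
import torch

def predict_sparsity_mask(hidden_states, lora_down, lora_up, threshold=0.5):
    """Score intermediate dims with a low-rank projection, then build one shared mask."""
    scores = hidden_states @ lora_down @ lora_up  # (batch, intermediate_dim) importance scores
    per_sample = scores > threshold               # binary mask per sample
    return per_sample.any(dim=0)                  # union across the batch dimension

hidden = torch.randn(4, 256)                      # batch of hidden states entering the MLP
down, up = torch.randn(256, 16), torch.randn(16, 1024)
mask = predict_sparsity_mask(hidden, down, up)
print(mask.shape, int(mask.sum()), "active intermediate dims")
```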
**Keywords:** Large Language Models, Sparse Inference, Differential Weight Caching
## Performance Benchmarks
State of Implementation:
- [x] Torch CPU kernels for fp16, fp32
- [x] Differential weight caching and selection for dynamic sparsity
- [ ] CUDA kernels for Sparse Inferencing
- [ ] CPU kernels for int8, int32, int64
### CPU Performance
```
Sparse LLaMA 3.2 3B vs LLaMA 3.2 3B (on HuggingFace Implementation):
- Time to First Token (TTFT): 1.51× faster (1.209s → 0.803s)
- Output Generation Speed: 1.79× faster (0.7 → 1.2 tokens/sec)
- Total Throughput: 1.78× faster (0.7 → 1.3 tokens/sec)
- Memory Usage: 26.4% reduction (13.25GB → 9.75GB)
```
### GPU Performance
```
Sparse LLaMA 3.2 3B vs Standard LLaMA 3.2 3B CUDA Results (on HuggingFace Implementation):
- Average time (Sparse): 0.021s
- Average time (Standard): 0.018s
- CUDA Speedups: 0.86x (WIP)
```
## Usage
### Quick Benchmark
```bash
# Run comprehensive benchmark
# Flags: --device ('cpu' or 'cuda'), --config (model configuration),
#        --num_runs (number of benchmark runs), --verbose (detailed timing output)
python benchmark.py \
    --device cpu \
    --config configs/llama_skip_causal_3b.json \
    --num_runs 50 \
    --verbose True
# Expected output:
# ⚡ TTFT Speedup: 1.51x
# 🚀 Output TPS Speedup: 1.79x
# 📊 Total Throughput Speedup: 1.78x
```
## Implementation Details
### Paired Replacement with Differential Caching
_sparse_transformers/csrc/weight_cache.h_
The weight cache is a class that manages the active weights for the sparse MLP. It differentially updates the MLP tensor memory pool for the next token based on the predicted sparsity mask.
```cpp
class WeightCache {
// Paired replacement algorithm for differential updates
    void update_active_weights(const torch::Tensor &mask);
};
```
**Performance Impact:**
- **6.7× faster cache updates**: 29.89ms (naive `index_select`) → 4.46ms (paired replacement)
- **Better cache locality**: Row major for Up Projection and Column major for Down Projection Matrices
- **Contiguous Memory Access**: Single memcpy for cache updates
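Conceptually, the differential update works roughly as sketched below in PyTorch (hypothetical names, not the real `WeightCache` API; it assumes the number of newly active rows matches the number of newly inactive ones):
```python
import torch

def paired_replacement(cache, weights, active_rows, prev_mask, new_mask):
    changed = prev_mask ^ new_mask                                # XOR with previous mask
    removed = (changed & prev_mask).nonzero().flatten().tolist()  # rows that became inactive
    added = (changed & new_mask).nonzero().flatten().tolist()     # rows that became active
    for stale_row, fresh_row in zip(removed, added):              # pair them one-to-one
        slot = active_rows.index(stale_row)                       # cache slot holding the stale row
        cache[slot].copy_(weights[fresh_row])                     # direct in-place substitution
        active_rows[slot] = fresh_row
    return cache, active_rows
```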
### Sparse MLP Inference
_sparse_transformers/csrc/sparse_mlp_op.cpp_
```python
sparse_mlp_forward(
x.detach(),
self.weight_cache.get_concat_weight(),
self.weight_cache.get_active_down_weight(),
self.down_proj_buffer,
self.combined_proj_buffer,
"silu"
)
```
**Performance Impact:**
- **5× faster CPU MLP inference**: 30.1ms → 6.02ms
- OpenMP parallelization with `torch::at::parallel_for`
- Bounded memory usage with weight cache memory pool
## Project Structure
```
├── sparse_transformers/ # C++ extension module
│ ├── csrc/
│ │ ├── sparse_mlp_op.cpp # Main CPU/CUDA dispatcher
│ │ ├── sparse_mlp_cuda.cu # CUDA kernels
│ │ └── weight_cache.h # Paired replacement caching
│ ├── __init__.py # Python bindings
│ └── CMakeLists.txt # Build configuration
├── src/models/llama/
│ ├── modelling_llama_skip.py # Statistical sparsity model
│ └── configuration_llama_skip.py # Model configuration
├── tools/
│ └── component_timing.py # Performance profiling
└── run_benchmark.py # End-to-end benchmarks
```
## Installation
### Build C++ Extensions
```bash
# Clone repository
git clone https://github.com/nimbleedge/sparse_transformers.git
cd sparse_transformers
```
Set up conda environment and install dependencies
```bash
conda create -n sparse_transformers python=3.10
conda activate sparse_transformers
```
Install torch dependencies from [requirements.txt](requirements.txt#L2)
```bash
# Install in editable mode (builds C++ extensions automatically)
pip install -r requirements.txt
pip install -e . # Auto-detect (prefer GPU if available)
pip install -e . --build-option=cpu # Force CPU-only build
pip install -e . --build-option=gpu # Force GPU build (fallback to CPU if not available)
# Alternative: Direct setup.py commands
python setup.py develop # Auto-detect (prefer GPU if available)
python setup.py develop cpu # Force CPU-only build
python setup.py develop gpu # Force GPU build (fallback to CPU if not available)
# Verify installation
python -c "import sparse_transformers; print('✅ Installation successful')"
```
## Community engagement
We welcome any feedback or suggestions - please join our
[Discord](https://discord.gg/y8WkMncstk) to engage with the community.
## Contributing
We welcome contributions from the community! Areas of particular interest are:
- **Additional models**: Extend beyond LLaMA to other architectures
- **Quantization**: Combine with INT8/FP16 optimizations
- **Attention Kernels**: Implement Sparse Attention Kernels
Please read our [Contributing Guidelines](CONTRIBUTING.md) to get started.
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
|
https://github.com/adgaultier/caracal
|
caracal
Make your programs stealthier🐝
Languages: Rust (94.6%), C (4.4%), Just (1.0%)
.github
.github
caracal-common
caracal-common
caracal-ebpf
caracal-ebpf
caracal
caracal
...
.gitignore
.gitignore
Cargo.toml
Cargo.toml
Justfile
Justfile
LICENSE
LICENSE
README.md
README.md
> README.md
<div align="center">
<h1>Caracal</h1>
<h3>Make your programs stealthier </h3>
<img src="https://github.com/user-attachments/assets/089060da-1a14-475d-8aa3-e1bfae15e8f7" style="width: 60%; height: auto;">
<p><small><i>The caracal cat is one of Africa's ultimate hunters,<br> a stealthy cat with an exceptional ability to hunt out prey on the savanna</i></small></p>
</div>
⚡ Powered by [Aya](https://aya-rs.dev)🐝
## 💡 Overview
Caracal is a Rust implementation of eBPF techniques that:
1. hide target bpf programs & maps → won't be visible with `bpftop`, `bpftool` ...
2. hide target processes → won't be visible with `ps`, `top`, `procs`, `ls /proc` ...
3. are resilient to some "unhiding" bruteforce techniques
## 📚 Documentation
Jump to:
- [Focus on 1 & 2](caracal/README.md)
- [Focus on 3](caracal-ebpf/src/deunhide/README.md)
## 🚀 Setup
You need a Linux based OS.
### ⚒️ Build from source
To build from source, make sure you have:
- [bpf-linker](https://github.com/aya-rs/bpf-linker) installed.
- [rust](https://www.rust-lang.org/tools/install) installed with `nightly` toolchain.
#### 1. Build ebpf program
```
cd caracal-ebpf && cargo build --release
```
#### 2. Build user space program
```
cargo build --release
```
This command will produce the `caracal` executable in `target/release`, which you can add to your `$PATH`
### 📥 Binary release
You can download the pre-built binaries from the [release page](https://github.com/adgaultier/caracal/releases)
<br>
## 🪄 Usage
Run `caracal` with root privileges:
```
caracal --pid <pids> --bpf-prog-id <bpf-ids> -v
```
- `<pids>`: List of process IDs to hide (comma-separated, e.g., 123,456)
- `<bpf-ids>`: List of eBPF program IDs to hide (comma-separated, e.g., 789,101)
- `-v / --verbose`: Verbosity
Example:
```
sudo caracal --pid $PPID,1337 --bpf-prog-id 23,24,26 -v
```
will hide:
- `caracal` launching process & its children
- 1337 process & its children
- `caracal` eBPF program & maps
- 23,24,26 eBPF programs & maps
## ⚠️ Disclaimer
`caracal` is developed for educational purposes only
<br>
## ✍️ Authors
[Adrien Gaultier](https://github.com/adgaultier)
<br>
## ⚖️ License
GPLv3
|
https://github.com/iilegacyyii/DataInject-BOF
|
DataInject-BOF
Hijacks code execution via overwriting Control Flow Guard pointers in combase.dll
Languages: C (98.6%), Makefile (1.4%)
dist
dist
...
.gitattributes
.gitattributes
.gitignore
.gitignore
LICENSE
LICENSE
Makefile
Makefile
README.md
README.md
> README.md
# Data Inject BOF
A beacon object file implementation of the process injection proof-of-concept from my blog post [Control Flow Hijacking via Data Pointers](https://www.legacyy.xyz/defenseevasion/windows/2025/04/16/control-flow-hijacking-via-data-pointers.html).
Hijacks control flow via overwriting `combase.dll`'s Control Flow Guard function pointers called by COM proxying functions.
## Important Notes
- From my testing, `explorer.exe` is the current best candidate in terms of an easy triggering mechanism due to its heavy reliance on COM proxying. Would recommend experimenting.
- **Make sure** shellcode is 64-bit as this BOF only supports 64-bit beacons & target processes.
- This has only been tested on Windows versions `Win10 21H2 (19044.5737)` & `Win11 24H2 (26100.3775)`.
## Usage
```
datainject <pid> <shellcode path>
```
### Examples
For the sake of example, all process IDs are assumed to be `1234`
**Inject into explorer.exe, execute shellcode upon COM call (can be triggered by right clicking or opening file explorer)**
```
datainject 1234 C:\users\attacker\payloads\beacon_x64.bin
```
## References
- [Control Flow Hijacking via Data Pointers](https://www.legacyy.xyz/defenseevasion/windows/2025/04/16/control-flow-hijacking-via-data-pointers.html) - My blog post walking through my methodology for weaponising this.
- [Threadless Inject](https://github.com/CCob/ThreadlessInject) - The project that inspired me to start this research.
|
https://github.com/tchebb/openwv
|
openwv
Open reimplementation of Google's Widevine Content Decryption Module for browsers
Languages: Rust (94.4%), C++ (5.6%)
src
src
third-party
third-party
...
.clang-format
.clang-format
.gitignore
.gitignore
.gitmodules
.gitmodules
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
> README.md
OpenWV is a free and open-source reimplementation of Google's Widevine Content
Decryption Module (CDM), the portion of the Widevine DRM system that runs in
your browser, obtains content keys for protected media, and decrypts the media
using those keys. OpenWV is a drop-in replacement for Google's [official,
proprietary CDM][official-cdm] and implements the same [shared library
API][chromium-cdm-api].
OpenWV does **not** come with a device identity and will not work without one.
A device identity, typically stored as a [`.wvd` file][pywidevine], contains
metadata about a Widevine client as well as a private key that authenticates
that client to Widevine license servers. Depending on the client's identity, a
license server may return low-value content keys (e.g. standard definition
only), high-value keys (e.g. HD/UHD), or no keys at all. If you want to use
OpenWV, you must obtain an appropriate `.wvd` file yourself and include it in
the build as described below.
[official-cdm]: https://github.com/mozilla-firefox/firefox/blob/main/toolkit/content/gmp-sources/widevinecdm.json
## Compilation
Because CDM libraries are heavily sandboxed by browsers, OpenWV cannot read
configuration from disk at runtime. That means that all configuration,
including the device identity mentioned above, must be present at build-time.
As such, there are no official precompiled binaries: **the only way to use
OpenWV is to build it yourself**.
To build OpenWV, follow these steps:
1. Make sure that [Git][git], [Rust][rust], and [Clang][clang-install] are
installed on your system. (To install Clang on Windows 10/11, run
`winget install LLVM.LLVM`.)
2. Clone this repository and its submodule, telling Git to keep the two in sync:
`git clone --recurse-submodules -c submodule.recurse=true https://github.com/tchebb/openwv.git`
3. Place your `.wvd` file in the project root (alongside this README) and name
it `embedded.wvd`. You may set other configuration options as desired by
editing the `CONFIG` variable in `src/config.rs`.
4. Build the library: `cargo build --release`
5. Find the built library in `target/release/`. Depending on your OS, it will
be named `libwidevinecdm.so`, `widevinecdm.dll`, or `libwidevinecdm.dylib`.
[git]: https://git-scm.com/downloads
[rust]: https://rustup.rs/
[clang-install]: https://rust-lang.github.io/rust-bindgen/requirements.html#installing-clang
## Installation
*NOTE: In these instructions, "the OpenWV library" means the library you built
in the last section—`libwidevinecdm.so` on Linux, `widevinecdm.dll` on Windows,
or `libwidevinecdm.dylib` on macOS.*
### Firefox
1. Open `about:support` and note your "Profile Directory".
2. Open `about:config`. Set `media.gmp-widevinecdm.autoupdate` to `false`
(creating it if needed), and set `media.gmp-widevinecdm.version` to `openwv`
(or to any other name for the directory you create in step 3).
3. Navigate to `gmp-widevinecdm/` within your profile directory.
4. Create a subdirectory named `openwv` and place the OpenWV library and
`manifest-firefox.json`, renamed to `manifest.json`, inside it. Note that
you **must** use OpenWV's `manifest.json` instead of Google's, as Firefox
will not play video if we falsely advertise decoding support.
**If you manually check for addon updates, Firefox will replace OpenWV with
Google's CDM**. The `media.gmp-widevinecdm.autoupdate` setting prevents
automatic updates, but [there's no way][firefox-updater] to prevent manual
updates. If this happens, you need only set `media.gmp-widevinecdm.version` back
to `openwv`—no need to repeat the other steps.
### Chrome/Chromium
1. Open `chrome://version/` and note the **parent** directory of your "Profile
Path". This is Chrome's "User Data Directory".
2. Navigate to `WidevineCdm/` within the User Data Directory.
3. If there are any existing subdirectories, delete them.
4. Create a subdirectory named `9999` (or any numeric version greater than that
of Google's CDM), and place OpenWV's `manifest-chromium.json`, renamed to
`manifest.json`, inside it.
5. Beside `manifest.json`, create a directory named `_platform_specific` with
a directory named `{linux,win,mac}_{x86,x64,arm,arm64}`, as appropriate,
inside it. For example, `_platform_specific/linux_x64/` on 64-bit Intel
Linux. Place the OpenWV library in this innermost directory.
6. On Linux only, launch and quit the browser once before playing any
Widevine-protected media. OpenWV will not be loaded on the first launch due
to an [implementation quirk][chromium-hint] of Chromium.
### Kodi (via [InputStream Adaptive](https://github.com/xbmc/inputstream.adaptive))
1. Build OpenWV with `encrypt_client_id: EncryptClientId::Never`, as Kodi
cannot handle service certificate request messages as of this writing
(InputStream Adaptive v21.5.10).
2. In Kodi, navigate to "Add-ons > My add-ons > VideoPlayer InputStream >
InputStream Adaptive" and select "Configure".
3. Ensure the settings level (the gear icon) is set to at least "Advanced".
4. In the "Expert" tab, set "Decrypter path" to the directory where you've put
the OpenWV library. Don't include the library name itself.
[firefox-updater]: https://github.com/mozilla-firefox/firefox/blob/FIREFOX_139_0_RELEASE/toolkit/mozapps/extensions/internal/GMPProvider.sys.mjs#L391-L455
[chromium-hint]: https://source.chromium.org/chromium/chromium/src/+/refs/tags/137.0.7151.59:chrome/common/media/cdm_registration.cc;l=163-187
## References
The APIs, algorithms, and data types used in OpenWV were gathered from a
variety of official and unofficial sources:
- API headers (`third-party/cdm/`) come from [the Chromium source][chromium-cdm-api].
- Widevine protobuf definitions (`third-party/widevine_protos.pb`) were
extracted from `chromecast_oss/chromium/src/out_chromecast_steak/release/pyproto/`
in Google's [Chromecast Ultra v1.42 source drop][steak-1.42-oss].
- The `.wvd` format and many algorithmic details come from the [pywidevine][pywidevine]
project.
[chromium-cdm-api]: https://chromium.googlesource.com/chromium/cdm/
[pywidevine]: https://github.com/devine-dl/pywidevine/
[steak-1.42-oss]: https://drive.google.com/file/d/153TuZqh9FTBKRabGx686tbJefeqM2sJf/view?usp=drive_link
|
https://github.com/ASIG-X/RESPLE
|
RESPLE
The first 6-DoF spline-based recursive motion estimator for LiDAR-based odometry
Languages: C++ (92.8%), Python (4.4%), CMake (2.2%), Dockerfile (0.6%)
AviaResple_msgs
AviaResple_msgs
HAP360_msgs
HAP360_msgs
Mid70Avia_msgs
Mid70Avia_msgs
doc
doc
estimate_msgs
estimate_msgs
...
.gitignore
.gitignore
Dockerfile
Dockerfile
LICENSE
LICENSE
README.md
README.md
> README.md
# RESPLE: Recursive Spline Estimation for LiDAR-Based Odometry
[**YouTube**](https://youtu.be/3-xLRRT25ys) | **[arXiv](https://arxiv.org/abs/2504.11580)** | **[Website](https://asig-x.github.io/resple_web/)**
This is the official repository for RESPLE, the first B-spline-based recursive state estimation framework for estimating 6-DoF dynamic motions. Using RESPLE as the estimation backbone, we developed a unified suite of direct LiDAR-based odometry systems, including:
* LiDAR-only odometry (LO)
* LiDAR-inertial odometry (LIO)
* Multi-LiDAR odometry (MLO)
* Multi-LiDAR-inertial Odometry (MLIO)
These four variants have been tested on real-world datasets and in our own experiments, covering aerial, wheeled, legged, and wearable platforms operating in indoor, urban, and wild environments with diverse LiDAR types. We look forward to your comments and feedback!
### BibTex Citation
```
@ARTICLE{cao2025resple,
author={Cao, Ziyu and Talbot, William and Li, Kailai},
title={RESPLE: Recursive Spline Estimation for LiDAR-Based Odometry},
journal={arXiv preprint arXiv:2504.11580},
year={2025}
}
```
### Dependencies
Tested with [ROS2 Humble](https://docs.ros.org/en/humble/Installation.html) on Ubuntu 22.04
```
sudo apt install libomp-dev libpcl-dev libeigen3-dev
sudo apt install ros-humble-pcl*
# Optional: sudo apt install ros-humble-rosbag2-storage-mcap (for playing .mcap file if testing GrandTour dataset)
```
### Compilation
```
cd ~/ros2_ws/src
git clone --recursive git@github.com:ASIG-X/RESPLE.git
cd ..
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select estimate_msgs livox_ros_driver livox_interfaces livox_ros_driver2 resple
```
## Docker Build
To build a docker image capable of running the examples and dataset:
```bash
cd ~/path/to/src
git clone --recursive git@github.com:ASIG-X/RESPLE.git
cd RESPLE
docker build --ssh default --tag resple .
```
## Own experimental datasets ([LINK to SURFdrive](https://surfdrive.surf.nl/files/index.php/s/lfXfApqVXTLIS9l))
Password: RESPLE2025
<!--  -->
<!-- [](https://youtu.be/2OvjGnxszf8) -->
<div align="left">
<img src="doc/hemdyn_clip.gif" width=49.6% />
<img src="doc/Rcampus_clip.gif" width = 49.6% >
</div>
<br>
**HelmDyn (Helm Dynamic) dataset**
* 1 Livox Mid360 mounted on a helmet as a mobile platform
* 10 sequences recorded with very dynamic motions combining walking, running, jumping, and in-hand waving within a cubic space
* Ground truth trajectory recorded using a high-precision (submillimeter), low-latency motion capture system (Qualisys) involving 20 cameras
**R-Campus dataset**
* 1 Livox Avia mounted on a bipedal wheeled robot (Direct Drive DIABLO)
* 1 sequence in walking speed recorded in a large-scale campus environment
* Trajectory starts and ends at the same location point.
## Usage
For LIO use, change `if_lidar_only` in `resple/config/config_xxx.yaml` to `false`.
* [HelmDyn](https://surfdrive.surf.nl/files/index.php/s/lfXfApqVXTLIS9l) dataset (Livox Mid360)
```
source install/setup.bash
ros2 launch resple resple_helmdyn01.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* [R-Campus](https://surfdrive.surf.nl/files/index.php/s/lfXfApqVXTLIS9l) dataset (Livox Avia)
```
source install/setup.bash
ros2 launch resple resple_r_campus.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* [NTU VIRAL](https://ntu-aris.github.io/ntu_viral_dataset/) dataset (OUSTER OS1-16)
```
source install/setup.bash
ros2 launch resple resple_eee_02.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* [MCD](https://mcdviral.github.io/) dataset (Livox Mid70)
```
source install/setup.bash
ros2 launch resple resple_ntu_day_01.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* GrandTour (Hesai XT32, Livox Mid360)
```
source install/setup.bash
ros2 launch resple resple_heap_testsite_hoenggerberg.launch.py
# ros2 launch resple resple_jungfraujoch_tunnel_small.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/hesai_livox_ap20_converted.mcap
```
### Docker
With the docker image built (see docker build instructions), one can run the algorithm in a docker container by following these steps.
Allow the docker user to generate graphics:
```bash
xhost +local:docker
```
Replacing `/path/to/data` with the location of the datasets, run the container (with mounted source code for development):
```bash
docker run -it -e DISPLAY=$DISPLAY \
-v .:/root/ros2_ws/src/RESPLE \
-v /tmp/.X11-unix/:/tmp/.X11-unix/ \
-v ~/data/resple_dataset/:/root/data/resple_dataset \
-v ~/data/grand_tour_box/datasets:/root/data/grand_tour_box/datasets \
--name resple resple
```
Note: To recompile inside the docker container run `colcon build --packages-up-to resple`. If no development is intended, then one can omit `-v .:/root/ros2_ws/src/RESPLE`.
Replacing `<filename>` with the launch file from above, launch with:
```bash
ros2 launch resple <filename>.launch.py
```
Create a second terminal attached to the container with:
```bash
docker exec -it resple bash
```
In this second container, replacing `<example>/<filename>` to make a valid bag filepath, play the dataset:
```bash
ros2 bag play ~/data/resple_dataset/<example>/
```
If the container is already run, then:
* It can be removed with:
```bash
docker rm resple
```
* It can be started with:
```bash
docker start resple
```
* It can be attached to with:
```bash
docker attach resple
```
* It can be stopped with:
```bash
docker stop resple
```
## Contributors
Ziyu Cao (Email: ziyu.cao@liu.se)
William Talbot (Email: wtalbot@ethz.ch)
Kailai Li (Email: kailai.li@rug.nl)
## Credits
Thanks for [SFUISE](https://github.com/ASIG-X/SFUISE), [ikd-Tree](https://github.com/hku-mars/ikd-Tree), [FAST-LIO](https://github.com/hku-mars/FAST_LIO), [Livox-SDK](https://github.com/Livox-SDK), and [basalt](https://gitlab.com/VladyslavUsenko/basalt).
## License
The source code is released under [GPLv3](https://www.gnu.org/licenses/) license.
|
https://github.com/jefferythewind/warpgbm
|
warpgbm
WarpGBM: High-Speed Gradient Boosting
Languages: Python (74.9%), Cuda (21.7%), C++ (3.4%)
.github/workflows
.github/workflows
examples
examples
tests
tests
warpgbm
warpgbm
...
.gitignore
.gitignore
LICENSE
LICENSE
MANIFEST.in
MANIFEST.in
README.md
README.md
pyproject.toml
pyproject.toml
> README.md

# WarpGBM
WarpGBM is a high-performance, GPU-accelerated Gradient Boosted Decision Tree (GBDT) library built with PyTorch and CUDA. It offers blazing-fast histogram-based training and efficient prediction, with compatibility for research and production workflows.
**New in v1.0.0:** WarpGBM introduces *Invariant Gradient Boosting* — a powerful approach to learning signals that remain stable across shifting environments (e.g., time, regimes, or datasets). Powered by a novel algorithm called **[Directional Era-Splitting (DES)](https://arxiv.org/abs/2309.14496)**, WarpGBM doesn't just train faster than other leading GBDT libraries — it trains smarter.
If your data evolves over time, WarpGBM is the only GBDT library designed to *adapt and generalize*.
---
## Contents
- [Features](#features)
- [Benchmarks](#benchmarks)
- [Installation](#installation)
- [Learning Invariant Signals Across Environments](#learning-invariant-signals-across-environments)
- [Why This Matters](#why-this-matters)
- [Visual Intuition](#visual-intuition)
- [Key References](#key-references)
- [Examples](#examples)
- [Quick Comparison with LightGBM CPU version](#quick-comparison-with-lightgbm-cpu-version)
- [Pre-binned Data Example (Numerai)](#pre-binned-data-example-numerai)
- [Documentation](#documentation)
- [Acknowledgements](#acknowledgements)
- [Version Notes](#version-notes)
## Features
- **Blazing-fast GPU training** with custom CUDA kernels for binning, histogram building, split finding, and prediction
- **Invariant signal learning** via [Directional Era-Splitting (DES)](https://arxiv.org/abs/2309.14496) — designed for datasets with shifting environments (e.g., time, regimes, experimental settings)
- Drop-in **scikit-learn style interface** for easy adoption
- Supports **pre-binned data** or **automatic quantile binning**
- Works with `float32` or `int8` inputs
- Built-in **validation and early stopping** support with MSE, RMSLE, or correlation metrics
- Simple install with `pip`, no custom drivers required
> 💡 **Note:** WarpGBM v1.0.0 is a *generalization* of the traditional GBDT algorithm.
> To run standard GBM training at maximum speed, simply omit the `era_id` argument — WarpGBM will behave like a traditional booster but with industry-leading performance.
---
## Benchmarks
### Scikit-Learn Synthetic Data: 1 Million Rows and 1,000 Features
In this benchmark we compare the speed and in-sample correlation of **WarpGBM v1.0.0** against LightGBM, XGBoost and CatBoost, all with their GPU-enabled versions. This benchmark runs on Google Colab with the L4 GPU environment.
```
WarpGBM: corr = 0.8882, train = 17.4s, infer = 3.2s
XGBoost: corr = 0.8877, train = 33.2s, infer = 8.0s
LightGBM: corr = 0.8604, train = 29.8s, infer = 1.6s
CatBoost: corr = 0.8935, train = 392.1s, infer = 379.2s
```
Colab Notebook: https://colab.research.google.com/drive/16U1kbYlD5HibGbnF5NGsjChZ1p1IA2pK?usp=sharing
---
## Installation
### Recommended (GitHub, always latest):
```bash
pip install git+https://github.com/jefferythewind/warpgbm.git
```
This installs the latest version directly from GitHub and compiles CUDA extensions on your machine using your **local PyTorch and CUDA setup**. It's the most reliable method for ensuring compatibility and staying up to date with the latest features.
### Alternatively (PyPI, stable releases):
```bash
pip install warpgbm
```
This installs from PyPI and also compiles CUDA code locally during installation. This method works well **if your environment already has PyTorch with GPU support** installed and configured.
> **Tip:**\
> If you encounter an error related to mismatched or missing CUDA versions, try installing with the following flag. This is currently required in the Colab environments.
>
> ```bash
> pip install warpgbm --no-build-isolation
> ```
### Windows
Thank you, ShatteredX, for providing working instructions for a Windows installation.
```
git clone https://github.com/jefferythewind/warpgbm.git
cd warpgbm
python setup.py bdist_wheel
pip install .\dist\warpgbm-0.1.15-cp310-cp310-win_amd64.whl
```
Before either method, make sure you’ve installed PyTorch with GPU support:\
[https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
---
## Learning Invariant Signals Across Environments
Most supervised learning models rely on an assumption known as the **Empirical Risk Minimization (ERM)** principle. Under ERM, the data distribution connecting inputs \( X \) and targets \( Y \) is assumed to be **fixed** and **stationary** across training, validation, and test splits. That is:
> The patterns you learn from the training set are expected to generalize out-of-sample — *as long as the test data follows the same distribution as the training data.*
However, this assumption is often violated in real-world settings. Data frequently shifts across time, geography, experimental conditions, or other hidden factors. This phenomenon is known as **distribution shift**, and it leads to models that perform well in-sample but fail catastrophically out-of-sample.
This challenge motivates the field of **Out-of-Distribution (OOD) Generalization**, which assumes your data is drawn from **distinct environments or eras** — e.g., time periods, customer segments, experimental trials. Some signals may appear predictive within specific environments but vanish or reverse in others. These are called **spurious signals**. On the other hand, signals that remain consistently predictive across all environments are called **invariant signals**.
WarpGBM v1.0.0 introduces **Directional Era-Splitting (DES)**, a new algorithm designed to identify and learn from invariant signals — ignoring signals that fail to generalize across environments.
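For intuition only, here is a toy NumPy sketch of the directional-consistency test behind DES (WarpGBM's real split search runs in CUDA kernels over histograms; the helper below is not part of its API):
```python
import numpy as np

def split_is_directionally_consistent(x, residuals, era_id, threshold):
    """Keep a split only if left-vs-right points the same way in every era."""
    directions = []
    for era in np.unique(era_id):
        in_era = era_id == era
        left = residuals[in_era & (x <= threshold)]
        right = residuals[in_era & (x > threshold)]
        if left.size == 0 or right.size == 0:
            return False                      # split unsupported in this era
        directions.append(np.sign(left.mean() - right.mean()))
    return len(set(directions)) == 1          # invariant: same direction across all eras
```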
---
### Why This Matters
- Standard models trained via ERM can learn to exploit **spurious correlations** that only hold in some parts of the data.
- DES explicitly tests whether a feature's split is **directionally consistent** across all eras — only such *invariant splits* are kept.
- This approach has been shown to reduce overfitting and improve out-of-sample generalization, particularly in financial and scientific datasets.
---
### Visual Intuition
We contrast two views of the data:
- **ERM Setting**: All data is assumed to come from the same source (single distribution).\
No awareness of environments — spurious signals can dominate.
- **OOD Setting (Era-Splitting)**: Data is explicitly grouped by environment (era).\
The model checks whether a signal holds across all groups — enforcing **robustness**.
<img src="https://github.com/user-attachments/assets/2be11ef3-6f2e-4636-ab91-307a73add247" alt="ChatGPT Image May 28, 2025, 05_05_09 PM" width="320"/>
---
### Key References
- **Invariant Risk Minimization (IRM)**: [Arjovsky et al., 2019](https://arxiv.org/abs/1907.02893)
- **Learning Explanations That Are Hard to Vary**: [Parascandolo et al., 2020](https://arxiv.org/abs/2009.00329)
- **Era Splitting: Invariant Learning for Decision Trees**: [DeLise, 2023](https://arxiv.org/abs/2309.14496)
---
WarpGBM is the **first open-source GBDT framework to integrate this OOD-aware approach natively**, using efficient CUDA kernels to evaluate per-era consistency during tree growth. It’s not just faster — it’s smarter.
---
## Examples
WarpGBM is easy to drop into any supervised learning workflow and comes with curated examples in the `examples/` folder.
- `Spiral Data.ipynb`: synthetic OOD benchmark from Learning Explanations That Are Hard to Vary
### Quick Comparison with LightGBM CPU version
```python
import numpy as np
from sklearn.datasets import make_regression
from time import time
import lightgbm as lgb
from warpgbm import WarpGBM
# Create synthetic regression dataset
X, y = make_regression(n_samples=100_000, n_features=500, noise=0.1, random_state=42)
X = X.astype(np.float32)
y = y.astype(np.float32)
# Train LightGBM
start = time()
lgb_model = lgb.LGBMRegressor(max_depth=5, n_estimators=100, learning_rate=0.01, max_bin=7)
lgb_model.fit(X, y)
lgb_time = time() - start
lgb_preds = lgb_model.predict(X)
# Train WarpGBM
start = time()
wgbm_model = WarpGBM(max_depth=5, n_estimators=100, learning_rate=0.01, num_bins=7)
wgbm_model.fit(X, y)
wgbm_time = time() - start
wgbm_preds = wgbm_model.predict(X)
# Results
print(f"LightGBM: corr = {np.corrcoef(lgb_preds, y)[0,1]:.4f}, time = {lgb_time:.2f}s")
print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, y)[0,1]:.4f}, time = {wgbm_time:.2f}s")
```
**Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**
```
LightGBM: corr = 0.8742, time = 37.33s
WarpGBM: corr = 0.8621, time = 5.40s
```
---
### Pre-binned Data Example (Numerai)
WarpGBM can save additional training time if your dataset is already pre-binned. The Numerai tournament data is a great example:
```python
import pandas as pd
from numerapi import NumerAPI
from time import time
import lightgbm as lgb
from warpgbm import WarpGBM
import numpy as np
napi = NumerAPI()
napi.download_dataset('v5.0/train.parquet', 'train.parquet')
train = pd.read_parquet('train.parquet')
feature_set = [f for f in train.columns if 'feature' in f]
target = 'target_cyrus'
X_np = train[feature_set].astype('int8').values
Y_np = train[target].values
# LightGBM
start = time()
lgb_model = lgb.LGBMRegressor(max_depth=5, n_estimators=100, learning_rate=0.01, max_bin=7)
lgb_model.fit(X_np, Y_np)
lgb_time = time() - start
lgb_preds = lgb_model.predict(X_np)
# WarpGBM
start = time()
wgbm_model = WarpGBM(max_depth=5, n_estimators=100, learning_rate=0.01, num_bins=7)
wgbm_model.fit(X_np, Y_np)
wgbm_time = time() - start
wgbm_preds = wgbm_model.predict(X_np)
# Results
print(f"LightGBM: corr = {np.corrcoef(lgb_preds, Y_np)[0,1]:.4f}, time = {lgb_time:.2f}s")
print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, Y_np)[0,1]:.4f}, time = {wgbm_time:.2f}s")
```
**Results (Google Colab Pro, A100 GPU):**
```
LightGBM: corr = 0.0703, time = 643.88s
WarpGBM: corr = 0.0660, time = 49.16s
```
---
## Documentation
### `WarpGBM` Parameters:
- `num_bins`: Number of histogram bins to use (default: 10)
- `max_depth`: Maximum depth of trees (default: 3)
- `learning_rate`: Shrinkage rate applied to leaf outputs (default: 0.1)
- `n_estimators`: Number of boosting iterations (default: 100)
- `min_child_weight`: Minimum sum of instance weight needed in a child (default: 20)
- `min_split_gain`: Minimum loss reduction required to make a further partition (default: 0.0)
- `histogram_computer`: Choice of histogram kernel (`'hist1'`, `'hist2'`, `'hist3'`) (default: `'hist3'`)
- `threads_per_block`: CUDA threads per block (default: 32)
- `rows_per_thread`: Number of training rows processed per thread (default: 4)
- `L2_reg`: L2 regularizer (default: 1e-6)
- `colsample_bytree`: Proportion of features to subsample to grow each tree (default: 1)
### Methods:
```
.fit(
X, # numpy array (float or int) 2 dimensions (num_samples, num_features)
y, # numpy array (float or int) 1 dimension (num_samples)
era_id=None, # numpy array (int) 1 dimension (num_samples)
X_eval=None, # numpy array (float or int) 2 dimensions (eval_num_samples, num_features)
y_eval=None, # numpy array (float or int) 1 dimension (eval_num_samples)
eval_every_n_trees=None, # const (int) >= 1
early_stopping_rounds=None, # const (int) >= 1
eval_metric='mse' # string, one of 'mse', 'rmsle' or 'corr'. For corr, loss is 1 - correlation(y_true, preds)
)
```
Train with optional validation set and early stopping.
```
.predict(
X # numpy array (float or int) 2 dimensions (predict_num_samples, num_features)
)
```
Predict on new data, using parallelized CUDA kernel.
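For example, invariant training only requires passing per-sample era identifiers to `fit`; a minimal sketch with synthetic data (shapes and hyperparameters here are arbitrary):
```python
import numpy as np
from warpgbm import WarpGBM

X = np.random.rand(10_000, 50).astype(np.float32)
y = np.random.rand(10_000).astype(np.float32)
era_id = np.random.randint(0, 10, size=10_000)   # 10 synthetic environments

model = WarpGBM(max_depth=4, n_estimators=50, learning_rate=0.05, num_bins=16)
model.fit(X, y, era_id=era_id)                   # era_id enables Directional Era-Splitting
preds = model.predict(X)
```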
---
## Acknowledgements
WarpGBM builds on the shoulders of PyTorch, scikit-learn, LightGBM, and the CUDA ecosystem. Thanks to all contributors in the GBDT research and engineering space.
---
## Version Notes
### v0.1.21
- Vectorized predict function replaced with CUDA kernel (`warpgbm/cuda/predict.cu`), parallelizing per sample, per tree.
### v0.1.23
- Adjusted gain in split kernel and added support for an eval set with early stopping based on MSE.
### v0.1.25
- Added `colsample_bytree` parameter and new test using Numerai data.
### v0.1.26
- Fixed memory bugs in prediction and `colsample_bytree` logic. Added "corr" eval metric.
### v1.0.0
- Introduced invariant learning via directional era-splitting (DES). Also streamlined VRAM usage compared to previous sub-versions.
|
https://github.com/ga2mer/MarathonRecomp
|
MarathonRecomp
An unofficial PC port of the Xbox 360 version of Sonic the Hedgehog (2006) created through the process of static recompilation
Languages: C++ (92.5%), CMake (3.4%), HLSL (1.9%), Metal (1.7%)
.github
.github
MarathonRecomp
MarathonRecomp
MarathonRecompLib
MarathonRecompLib
docs
docs
flatpak
flatpak
...
.editorconfig
.editorconfig
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
CMakePresets.json
CMakePresets.json
> README.md
<p align="center">
<img src="https://raw.githubusercontent.com/IsaacMarovitz/MarathonRecompResources/refs/heads/main/images/logo/Logo.png" width="512"/>
</p>
---
> [!CAUTION]
> This recompilation is still under active development and is NOT meant for public use. Support will not be provided until an official release.
Marathon Recompiled is an unofficial PC port of the Xbox 360 version of Sonic the Hedgehog (2006) created through the process of static recompilation. The port offers Windows, Linux, and macOS support.
**This project does not include any game assets. You must provide the files from your own legally acquired copy of the game to install or build Marathon Recompiled.**
[XenonRecomp](https://github.com/sonicnext-dev/XenonRecomp) and [XenosRecomp](https://github.com/sonicnext-dev/XenosRecomp) are the main recompilers used for converting the game's original PowerPC code and Xenos shaders into compatible C++ and HLSL code respectively. The development of these recompilers was directly inspired by [N64: Recompiled](https://github.com/N64Recomp/N64Recomp), which was used to create [Zelda 64: Recompiled](https://github.com/Zelda64Recomp/Zelda64Recomp).
## Table of Contents
- [Known Issues](#known-issues)
- [FAQ](#faq)
- [Building](#building)
- [Credits](#credits)
## Known Issues
Before reporting any issues, check if they are listed [here](https://github.com/sonicnext-dev/MarathonRecomp/issues).
### Original Game Bugs
Game bugs present on the original hardware are intentionally preserved and will not be fixed apart from a few minor exceptions in [#44](https://github.com/sonicnext-dev/MarathonRecomp/issues/44). Please do not report issues for these bugs and verify that the issue does not occur on original hardware before reporting. Bug reports for issues found in the original game will be rejected. Bugs that only happen in Marathon Recompiled must be accompanied by footage captured on original Xbox 360 hardware showing that the bug does not happen there.
### File Picker Unavailable on Steam Deck in Game Mode
Due to some restrictions of how the desktop environment on the Steam Deck works whilst in Game Mode, please note that you may need to at least first boot into Desktop Mode to be able to use the file picker to provide the game files.
Simply booting at least once in Desktop Mode will enable the Deck to use the file picker when going back to Game Mode. You can complete the entire installation process while in Desktop Mode to save yourself the trouble of browsing through Game Mode if necessary.
## FAQ
### Do you have a website?
Marathon Recompiled does not have an official website.
**Please link here when directing anyone to the project.**
> [!CAUTION]
> Do not download builds of Marathon Recompiled from anywhere but our [Releases](https://github.com/sonicnext-dev/MarathonRecomp/releases/latest) page.
>
> **We will never distribute builds on other websites, via Discord servers or via third-party update tools.**
### Why does the installer say my files are invalid?
The installer may display this error for several reasons. Please check the following to ensure your files are valid:
- Please read the [How to Install](#how-to-install) section and make sure you've acquired all of the necessary files correctly.
- Verify that you're not trying to add compressed files such as `.zip`, `.7z`, `.rar` or other formats.
- Only use the **Add Folder** option if you're sure you have a directory with the content's files already extracted, which means it'll only contain files like `.xex`, `.ar.00`, `.arl` and others. **This option will not scan your folder for compatible content**.
- Ensure that the files you've acquired correspond to the same region. **Discs and Title Updates from different regions can't be used together** and will fail to generate a patch.
- The installer will only accept **original and unmodified files**. Do not attempt to provide modified files to the installer.
### What are the keyboard bindings?
Pad|Key
-|-
A (Cross)|S
B (Circle)|D
X (Square)|A
Y (Triangle)|W
D-Pad - Up|Unbound
D-Pad - Down|Unbound
D-Pad - Left|Unbound
D-Pad - Right|Unbound
Start|Return
Back (Select)|Backspace
Left Trigger (L2)|1
Right Trigger (R2)|3
Left Bumper (L1)|Q
Right Bumper (R1)|E
Left Stick - Up|Up Arrow
Left Stick - Down|Down Arrow
Left Stick - Left|Left Arrow
Left Stick - Right|Right Arrow
Right Stick - Up|Unbound
Right Stick - Down|Unbound
Right Stick - Left|Unbound
Right Stick - Right|Unbound
---
You can change the keyboard bindings by editing `config.toml` located in the [configuration directory](#where-is-the-save-data-and-configuration-file-stored), although using a controller is highly recommended until Action Remapping is added in a future update.
Refer to the left column of [this enum template](https://github.com/sonicnext-dev/MarathonRecomp/blob/main/MarathonRecomp/user/config.cpp#L40) for a list of valid keys.
*The default keyboard layout is based on Devil's Details' keyboard layout for Sonic Generations (2011)*.
### Where is the save data and configuration file stored?
The save data and configuration files are stored at the following locations:
- Windows: `%APPDATA%\MarathonRecomp\`
- Linux: `~/.config/MarathonRecomp/`
You will find the save data under the `save` folder. The configuration file is named `config.toml`.
### I want to update the game. How can I avoid losing my save data? Do I need to reinstall the game?
Updating the game can be done by simply copying and replacing the files from a [release](https://github.com/sonicnext-dev/MarathonRecomp/releases) on top of your existing installation. **Your save data and configuration will not be lost.** You won't need to reinstall the game, as the game files will always remain the same across versions of Marathon Recompiled.
### How can I force the game to store the save data and configuration in the installation folder?
You can make the game ignore the [default configuration paths](#where-is-the-save-data-and-configuration-file-stored) and force it to save everything in the installation directory by creating an empty `portable.txt` file. You are directly responsible for the safekeeping of your save data and configuration if you choose this option.
### How can I force the game to run the installation again?
While it's unlikely you'll need to do this unless you've modified your game files by accident, you can force the installer to run again by using the launch argument: `--install`.
### How can I force the game to run under X11 or Wayland?
Use either of the following arguments to force SDL to run under the video driver you want:
- X11: `--sdl-video-driver x11`
- Wayland: `--sdl-video-driver wayland`
The second argument will be passed directly to SDL as a hint to try to initialize the game with your preferred option.
### Where is the game data for the Flatpak version installed?
Given it is not possible to run the game where the Flatpak is stored, the game data will be installed to `~/.var/app/io.github.sonicnext_dev.marathonrecomp/data`. The Flatpak build will only recognize this directory as valid. Feel free to reuse this data directory with a native Linux build if you wish to switch in the future.
If you wish to move this data to another location, you can do so by creating a symlink from this directory to the one where you'll migrate your installation to.
> [!WARNING]
> Using external frame rate limiters or performance overlays may degrade performance or have negative consequences.
### Can I install the game with a PlayStation 3 copy?
**You cannot use the files from the PlayStation 3 version of the game.** Supporting these files would require an entirely new recompilation, as they have proprietary formatting that only works on PS3 and the code for these formats is only present in that version. All significant differences present in the PS3 version of the game have been included in this project as options.
### Why is the game detecting my PlayStation controller as an Xbox controller?
If you're using a third-party input translation layer (such as DS4Windows or Steam Input), it is recommended that you disable these for full controller support.
### What other platforms will be supported?
This project does not plan to support any more platforms other than Windows, Linux and macOS at the moment. Any contributors who wish to support more platforms should do so through a fork.
## Building
[Check out the building instructions here](/docs/BUILDING.md).
## Credits
### Marathon Recompiled
- [ga2mer](https://github.com/ga2mer): Creator and Lead Developer of the recompilation.
- [Rei-san](https://github.com/ReimousTH): Game Internals Researcher and Patch Developer.
- [squidbus](https://github.com/squidbus): Graphics Developer.
- [IsaacMarovitz](https://github.com/IsaacMarovitz): Graphics & Installer Developer.
- [Hyper](https://github.com/hyperbx): Custom menus and Game Internals Researcher.
- [LJSTAR](https://github.com/LJSTARbird): Artist behind the project logo.
- [Skyth](https://github.com/blueskythlikesclouds): Lead Developer of Unleashed Recompiled and endlessly helpful resource.
- [Darío](https://github.com/DarioSamo): Maintainer of [Plume](https://github.com/renderbag/plume) & Graphics Developer.
- [Hotline Sehwani](https://www.youtube.com/watch?v=8mfOSTcTQNs): Artist behind installer music.
- [Syko](https://x.com/UltraSyko): Helped identify the fonts used in the original SonicNext logo.
### Unleashed Recompiled
- [Skyth](https://github.com/blueskythlikesclouds)
- [Sajid](https://github.com/Sajidur78)
- [Hyper](https://github.com/hyperbx)
- [Darío](https://github.com/DarioSamo)
- [ĐeäTh](https://github.com/DeaTh-G)
- [RadiantDerg](https://github.com/RadiantDerg)
- [PTKay](https://github.com/PTKay)
- [SuperSonic16](https://github.com/thesupersonic16)
- [NextinHKRY](https://github.com/NextinMono)
- [LadyLunanova](https://linktr.ee/ladylunanova)
- [LJSTAR](https://github.com/LJSTARbird)
- [saguinee](https://twitter.com/saguinee)
- [Goalringmod27](https://linktr.ee/goalringmod27)
- [M&M](https://github.com/ActualMandM)
- [DaGuAr](https://twitter.com/TheDaguar)
- [brianuuuSonic](https://github.com/brianuuu)
- [Kitzuku](https://github.com/Kitzuku)
|
https://github.com/Vector35/scc
|
scc
Languages: C (51.9%), C++ (32.4%), Roff (11.2%), M4 (1.7%), Yacc (0.9%), HTML (0.5%)
buildenv/msys
buildenv/msys
codegen
codegen
docs
docs
runtime
runtime
tests
tests
...
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitmodules
.gitmodules
AArch64.cgen
AArch64.cgen
Arm.cgen
Arm.cgen
> README.md
# Shellcode Compiler
The Shellcode Compiler started its life as an internal CTF tool before it was re-purposed to be the compiler integrated into Binary Ninja.
With the 5.0 release of [Binary Ninja](https://binary.ninja/), this repository was open-sourced. In the future, it's likely that SCC may be migrated into the main [binaryninja-api](https://github.com/Vector35/binaryninja-api/) repository.
Long-term, our plan is to replace scc with a version of LLVM using the appropriate compiler flags for minimal shellcode-style codegen. (We're already embedding multiple copies of LLVM -- one for the type parser and one for the debugger, so this need not be as much of a burden as it might sound.)
Note that scc is not being actively maintained, however pull-requests and [issues](https://github.com/Vector35/binaryninja-api/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22Component%3A%20SCC%22) are welcome.
## Documentation
Online documentation is available at: [https://scc.binary.ninja/](https://scc.binary.ninja/)
## Usage and Build Instructions
The build system uses cmake:
```
$ git clone --recursive https://github.com/vector35/scc
$ cd scc
$ cmake -S . -B build
...
$ cmake --build build
```
## Licensing
Some components may be released under compatible but slightly different open source licenses and should have their own LICENSE file as appropriate.
Remaining components are released under an [MIT](https://github.com/Vector35/scc/blob/dev/LICENSE.txt) license.
|
https://github.com/bvanjoi/bolt-ts
|
bolt-ts
A TypeScript Compiler Implemented in Rust
Languages: Rust (77.3%), TypeScript (19.7%), JavaScript (3.0%)
.github/workflows
.github/workflows
.vscode-template
.vscode-template
crates
crates
helper
helper
tests/cases/compiler
tests/cases/compiler
...
.gitignore
.gitignore
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
README.md
README.md
rust-toolchain.toml
rust-toolchain.toml
> README.md
# bolt-ts
bolt-ts is a TypeScript compiler implemented in Rust. The current implementation heavily leverages code ported from the original TypeScript compiler (tsc).
## Performance
When testing a subset of `type-fest` functionality, bolt-ts demonstrates:
- 2.5× faster than ts-go
- 5× faster than tsc
(Benchmarked on Apple M3 Max with 36GB RAM. See [typescript-compiler-bench](https://github.com/bvanjoi/typescript-compiler-bench) for details)
## Current Status
Core functionalities are operational but require refinement. Key pending improvements include:
- Parser: async functions, switch/with statements.
- Module Resolution: cache, `exports`/`imports` field support, `node_modules/@types` type definition resolution.
- Type Checking: enum implementation and various edge-case bugs.
- Output Generation: sourcemap generation, different module systems.
- And others: JS file processing, language service, etc.
|
https://github.com/NVIDIA-RTX/RTXNS
|
RTXNS
NVIDIA Neural Shading SDK
Languages: C++ (60.2%), Slang (30.8%), CMake (9.0%)
assets/data
assets/data
docs
docs
external
external
samples
samples
src
src
...
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitmodules
.gitmodules
CHANGELOG.md
CHANGELOG.md
CMakeLists.txt
CMakeLists.txt
> README.md
# RTX Neural Shading
RTX Neural Shading (RTXNS), also known as RTX Neural Shaders, is intended as a starting point for developers interested in bringing Machine Learning (ML) to their graphics applications. It provides a number of examples to help the reader understand how to train their own neural networks and then use those models to perform inference alongside their normal graphics rendering.
RTXNS uses the [Slang](https://shader-slang.com) shading language and utilizes either the DirectX Preview Agility SDK or the Vulkan Cooperative Vectors extension to provide access to the GPU's ML acceleration.
A number of examples are included which build upon each other from a simple inference example to more complex examples showing how to train a neural network to represent a shader or a texture. Helper functions to facilitate building your own neural networks are also included.
Alongside the core samples is a SlangPy sample to demonstrate how to use python and SlangPy for fast iteration and development of neural networks which can then be integrated into RTXNS for inference.
When exploring RTXNS, it is assumed that the reader is already familiar with ML and neural networks.
## Requirements
### General
[CMake v3.24.3][CMake] **|** [VS 2022][VS22] **|** [Slang v2025.10](https://shader-slang.com/tools/)
### DirectX
[DirectX Preview Agility SDK 1.717.0-preview](https://www.nuget.org/packages/Microsoft.Direct3D.D3D12/1.717.0-preview) **|** [Microsoft DXC 1.8.2505.28](https://www.nuget.org/packages/Microsoft.Direct3D.DXC/1.8.2505.28) **|** [Shader Model 6-9-Preview Driver](https://developer.nvidia.com/downloads/shadermodel6-9-preview-driver)
### Vulkan
GPU must support the Vulkan `VK_NV_cooperative_vector` extension (minimum NVIDIA RTX 20XX) **|** [Vulkan SDK 1.3.296.0](https://vulkan.lunarg.com/sdk/home) **|** Public Driver ≥ 572.16
## Known Issues
05/30/2025: When updating from v1.0.0 to v1.1.0, it is recommended to delete the CMake cache to avoid build errors.
## Project structure
| Directory | Details |
| --------------------------------- | -------------------------------------- |
| [/assets](assets) | _Asset files for samples_ |
| [/docs](docs) | _Documentation for showcased tech_ |
| [/samples](samples) | _Samples showcasing usage of MLPs_ |
| [/external/donut](external/donut) | _Framework used for the examples_ |
| [/external](external) | _Helper dependencies for the examples_ |
| [/src](src) | _Helper and utility functions_ |
## Getting started
- [Quick start guide](docs/QuickStart.md) for building and running the neural shading samples.
- [Library usage guide](docs/LibraryGuide.md) for using helper functions
### External Resources
This project uses [Slang](https://shader-slang.com) and the Vulkan CoopVector extensions. The following links provide more detail on these, and other technologies which may help the reader to better understand the relevant technologies, or just to provide further reading.
* [Slang User Guide](https://shader-slang.com/slang/user-guide/)
* [Automatic Differentiation](https://shader-slang.com/slang/user-guide/autodiff.html)
* [SlangPy](https://slangpy.readthedocs.io/en/latest/)
* [Vulkan `VK_NV_cooperative_vector` extension](https://registry.khronos.org/vulkan/specs/latest/man/html/VK_NV_cooperative_vector.html)
* [Donut](https://github.com/NVIDIAGameWorks/donut)
## Contact
RTXNS is actively being developed. Please report any issues directly through the GitHub issue tracker, and for any information or suggestions contact us at rtxns-sdk-support@nvidia.com
## Citation
Use the following BibTex entry to cite the usage of RTXNS in published research:
```bibtex
@online{RTXNS,
title = {{{NVIDIA}}\textregistered{} {RTXNS}},
author = {{NVIDIA}},
year = 2025,
url = {https://github.com/NVIDIA-RTX/RTXNS},
urldate = {2025-02-03},
}
```
## License
See [LICENSE.md](LICENSE.MD)
[VS22]: https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community&channel=Release&version=VS2022&source=VSLandingPage&passive=false&cid=2030
[CMake]: https://github.com/Kitware/CMake/releases/download/v3.24.3/cmake-3.24.3-windows-x86_64.msi
|
https://github.com/SamoZ256/hydra
|
hydra
A Nintendo Switch emulator for macOS
Languages: C++ (83.8%), C (12.1%), Swift (1.9%), CMake (1.7%)
.github/workflows
.github/workflows
externals
externals
img
img
res/nx-hbloader
res/nx-hbloader
src
src
...
.clang-format
.clang-format
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
LICENSE.txt
LICENSE.txt
> README.md
# Hydra
Hydra is an experimental Nintendo Switch emulator for macOS.
## Status
The emulator is still in very early stages. A few homebrew apps work perfectly, and some official games get in-game with various degrees of playability.

Only the NRO, NSO and NCA formats are supported. You can extract an NSP file into NCA with [this tool](https://github.com/SamoZ256/switch-extract-macos).
In order to run official games, you will need to download a set of patches to prevent crashes. You can get the patches together with a guide on how to install them [here](https://github.com/SamoZ256/hydra-patches).
## Usage
### Dependencies
You can install Hydra dependencies with a package manager of your choice, like `brew`.
```sh
brew install cmake ninja sdl3 fmt
```
### Building
First, clone the repository and update submodules.
```sh
git clone https://github.com/SamoZ256/hydra.git
cd hydra
git submodule update --init --recursive
```
Now configure CMake and build with Ninja.
```sh
cmake . -B build -G Ninja -DMACOS_BUNDLE=ON
ninja -C build
```
If you want to use the SwiftUI frontend instead of SDL3, you can use the `-DFRONTEND=SwiftUI` option.
### Running
If you built a macOS bundle, you will find a macOS app at `build/bin/Hydra.app`. Otherwise, you can run the emulator with the following command:
```sh
build/bin/hydra
```
For SDL3, you can drag and drop a ROM into the window or provide a path to the ROM as an argument when launching the emulator.
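For example (the ROM path below is a placeholder, not a file shipped with the emulator):
```sh
# Launch directly into a ROM; replace the path with your own NRO/NSO/NCA file
build/bin/hydra /path/to/your/game.nro
```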
### Configuring
You can find a config file at `/Users/USER/Library/Application Support/Hydra/config.toml` after launching the emulator at least once.
|
https://github.com/pRain1337/plouton
|
plouton
System Management Mode (SMM) game cheating framework
Languages: C (89.6%), DenizenScript (10.1%)
Plouton-UEFI
images
...
.gitignore
.gitmodules
LICENSE.md
README.md
> README.md
# Plouton - a System Management Mode (SMM) cheat framework
<p align="center">
<img src="/images/logo_plouton.jpg" alt="Picture of Plouton" width="600">
</p>
*Plouton was the master of the underworld, and thus he was able to command the riches of the earth and the souls of the dead.*
Plouton is a System Management Mode (SMM) (ring-2, *"underworld"*) PC game cheat framework.
This repository and code were created as a proof of concept and released as open source; we do not take any responsibility for further usage of this project.
Check out this [video demonstration](https://www.youtube.com/watch?v=HoLtvFKOZzY) of Plouton's CS2 cheat implementation.
# Supported platforms
Plouton supports only Intel-based systems; on AMD-based systems, some important features (e.g. the XHCI controller generating SMIs on USB events) are not available. The core functionality and memory code would still be applicable and could be reused.
The core has been tested on Intel generations from Series 200 (Skylake, Kaby Lake) up to Series 700 (Alder Lake, Raptor Lake).
According to the offsets in the Intel Chipset datasheet for the Series 800, it should also be supported but has not been tested.
# Building
See [Plouton-UEFI](Plouton-UEFI)
# Extending
To extend Plouton to support your game of choice, see [targets](Plouton-UEFI/Plouton/target/)
To extend Plouton to support your hardware (mouse, audio devices), see [hardware](Plouton-UEFI/Plouton/hardware/)
To extend Plouton to an OS other than Windows: sorry, this is not currently possible :-)
Contributions are welcome!
|
https://github.com/Lagrange-Labs/deep-prove
|
deep-prove
Framework to prove inference of ML models blazingly fast
Languages: Rust (94.6%), Python (5.2%)
.github
deep-prove
docker
docs
ff_ext
...
.envrc
.gitignore
.gitmodules
Cargo.lock
Cargo.toml
> README.md
# 🚀 DeepProve: Zero-Knowledge Machine Learning (zkml) Inference
Welcome to **DeepProve**, a cutting-edge framework designed to prove neural network inference using zero-knowledge cryptographic techniques. Whether you're working with Multi-Layer Perceptrons (MLPs) or Convolutional Neural Networks (CNNs), DeepProve offers a fast and efficient way to verify computations without revealing the underlying data.
zkml is the name of the subcrate implementing the proving logic.
## 🤔 What Does DeepProve Do?
DeepProve leverages advanced cryptographic methods like sumchecks and logup GKR to achieve sublinear proving times. This means you can prove the correctness of your model's inference faster than ever before!
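For readers unfamiliar with the primitive, the classical sumcheck protocol is sketched below in standard textbook notation (this is the general protocol, not DeepProve-specific code): it lets a prover convince a verifier that a large sum over the Boolean hypercube is correct, with the verifier doing only cheap per-round checks plus a single evaluation of the polynomial.
```latex
% Claim proved by sumcheck for an n-variate polynomial f over a finite field:
\[ H = \sum_{x_1 \in \{0,1\}} \sum_{x_2 \in \{0,1\}} \cdots \sum_{x_n \in \{0,1\}} f(x_1, x_2, \ldots, x_n) \]
% Round i: the prover sends a univariate polynomial g_i(X); the verifier checks
\[ g_i(0) + g_i(1) = v_{i-1} \qquad (v_0 = H \text{, the originally claimed sum}) \]
% then draws a random challenge r_i and sets v_i = g_i(r_i).
% After n rounds the verifier accepts if
\[ g_n(r_n) = f(r_1, \ldots, r_n) \]
% which costs only a single evaluation of f at a random point.
```
GKR chains this reduction layer by layer over an arithmetic circuit, which is a key ingredient in keeping both proving and verification fast.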
### 📊 Benchmark Highlights
CNN 264k: This runs a CNN on the CIFAR-10 dataset for a total of 264k parameters. DeepProve's proving is 158x faster at this size!
Dense 4M: This runs multiple dense layers for a total of 4 million parameters. DeepProve's proving is 54x faster at this size!
| Model Type | ZKML Proving Time (ms) | ZKML Verification Time (ms) | EZKL Proving Time (ms) | EZKL Verification Time (ms) |
|------------|------------------------|-----------------------------|------------------------|-----------------------------|
| CNN 264k | 1242 | 599 | 196567.01 | 312505 |
| Dense 4M | 2335 | 520 | 126831.3 | 1112 |
## 📜 Licensing
- **zkml folder**: Licensed under the [Lagrange License](https://github.com/Lagrange-Labs/deep-prove/blob/master/zkml/LICENSE), unless otherwise specified.
- **Rest of the Code**: Licensed under Apache 2.0 + MIT, as per the original repository.
## 🌟 Use Cases
Proving inference of AI models has a wide range of applications, especially in scenarios where privacy and trust are paramount. For instance, in healthcare, sensitive patient data can be used to make predictions without exposing the data itself. In finance, models can be verified for compliance without revealing proprietary algorithms. Additionally, in decentralized applications, zero-knowledge proofs can ensure the integrity of AI computations on the blockchain, fostering trust and transparency. These use cases highlight the transformative potential of ZKML in various industries.
## 🙏 Acknowledgements
This project builds upon the work from scroll-tech/ceno, reusing the sumcheck and GKR implementation from their codebase. Check out their work at [scroll-tech/ceno](https://github.com/scroll-tech/ceno).
For more technical details and usage instructions, dive into the [ZKML README](zkml/README.md).
Happy proving! 🎉
|
https://github.com/tpde2/tpde
|
tpde
A fast framework for writing baseline compiler back-ends in C++
Languages: LLVM (73.5%), C++ (24.7%), C (1.0%), CMake (0.4%), Python (0.4%), Shell (0.0%)
.github/workflows
LICENSES
deps
docs
tpde-encodegen
...
.clang-format
.gdbinit
.gitignore
.gitmodules
CMakeLists.txt
> README.md
# TPDE Compiler Back-End Framework
TPDE is a fast compiler back-end framework that adapts to existing SSA IRs.
The primary goal is low-latency compilation while maintaining reasonable (`-O0`) code quality, e.g., as baseline compiler for JIT compilation or unoptimized builds.
Currently, TPDE only targets ELF-based x86-64 and AArch64 (Armv8.1) platforms.
This repository contains:
- TPDE: the core compiler framework.
- TPDE-Encodegen: a utility for easing the use of TPDE by deriving code generators through LLVM's Machine IR.
- TPDE-LLVM: a standalone back-end for LLVM-IR, which compiles 10-20x faster than LLVM -O0 with similar code quality, usable as a library (e.g., for JIT), as a tool (`tpde-llc`), and integrated into Clang/Flang (with a patch).
For more information and getting started, consult the [documentation](https://docs.tpde.org/).
### Publications
- Tobias Schwarz, Tobias Kamm, and Alexis Engelke. TPDE: A Fast Adaptable Compiler Back-End Framework. [arXiv:2505.22610](https://arxiv.org/abs/2505.22610) [cs.PL]. 2025.
### License
Generally: Apache-2.0 WITH LLVM-exception. (Detailed license information is attached to every file. Dependencies may have different licenses.)
|