| url | description |
|---|---|
https://github.com/Idov31/NovaHypervisor
|
NovaHypervisor
NovaHypervisor is a defensive x64 Intel host based hypervisor. The goal of this project is to protect against kernel based attacks (either via Bring Your Own Vulnerable Driver (BYOVD) or other means) by safeguarding defense products (AntiVirus / Endpoint Protection) and kernel memory structures and preventing unauthorized access to kernel memory.
Languages: C++ (91.1%), Assembly (5.1%), C (3.8%)
Images
NovaClient
NovaHypervisor
...
.gitattributes
.gitignore
LICENSE.txt
NovaHypervisor.sln
README.md
> README.md
# NovaHypervisor
<p align="center">
<img alt="Logo" src="./Images/logo_transparent.png" width="400" height="400">
</p>
  
## Description
NovaHypervisor is a defensive x64 Intel host based hypervisor. The goal of this project is to protect against kernel based attacks (either via Bring Your Own Vulnerable Driver (BYOVD) or other means) by safeguarding defense products (AntiVirus / Endpoint Protection) and kernel memory structures and preventing unauthorized access to kernel memory.
NovaHypervisor is written in C++ and Assembly, and is designed to be compatible with Hyper-V and run on Windows 10 and later versions. Please see the [setup](#setup) section for more information on how to use it.
> [!WARNING]
> This project is in a very early stage of development and is not yet ready for production use. It is intended for educational purposes and to demonstrate the concepts of a defensive hypervisor.
> The project has been tested on the latest Windows 10, and while it should work on Windows 11, it has not been tested on that version yet.
## Usage
To use the NovaHypervisor, you will need to create a kernel service and start it:
```cmd
sc create NovaHypervisor type= kernel binPath= "C:\Path\To\NovaHypervisor.sys"
sc start NovaHypervisor
```
Then, you can add and remove the addresses that you want to protect using the [NovaClient](./NovaClient/) application:
```cmd
REM Add an address to protect
NovaClient.exe protect 0x12345678 <r|w|x> <execution hook>
REM Remove an address from protection
NovaClient.exe unprotect 0x12345678
```
- `protect`: Protect a memory address from being accessed. You can specify the type of protection (see the example after this list):
  - `r`: Read protection
  - `w`: Write protection
  - `x`: Execute protection
  The protection that you specify is the protection that the address will **have**. For example, if you want to remove execute privileges, specify `rw`.
- `unprotect`: Remove protection from a memory address.
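For instance, the following invocation keeps read and write access on a protected structure while stripping execute permissions. The address is a placeholder rather than a real kernel address, and the optional execution-hook argument from the usage above is omitted:
```cmd
REM Hypothetical example: allow read/write but remove execute on the protected address
NovaClient.exe protect 0xFFFFF80212345678 rw
REM Later, drop the protection again
NovaClient.exe unprotect 0xFFFFF80212345678
```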
> [!NOTE]
> Execution hooks via inline hook + EPT hooks are not supported and will not be supported for this project, in order to prevent abuse.
## Setup
### Compiling the Project
The setup to compile the project requires you to have:
- Visual Studio 2022 or later.
- Windows Driver Kit (WDK) installed.
### Target setup
To run the hypervisor, you will need to have a Windows 10 or later version installed on your machine. You will also need to have:
- Intel VT-x enabled.
- Virtualized IOMMU.
## Logging and Debugging
### Logging
NovaHypervisor uses [WPP](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/wpp-software-tracing) logging, as it provides an easy-to-use interface that also works in VMX root. To be able to see the logs, make sure to create a trace session once:
```cmd
logman create trace "NovaHypervisorLogs" -p {e74c1035-77d4-4c5b-9088-77056fae3aa3} 0xffffffff 0xff -o C:\Path\To\NovaHypervisor.etl
```
Later on, whenever you want to start or end the logging session you can use:
```cmd
logman start "NovaHypervisorLogs"
logman stop "NovaHypervisorLogs"
```
To view the logs you can use tools such as [TraceView](https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/traceview).
### Debugging
To test and debug it in your testing environment, run the following commands from an elevated command prompt and then restart your machine:
```cmd
bcdedit /set testsigning on
bcdedit /debug on
bcdedit /dbgsettings net hostip:<HOSTIP> port:55000 key:1.2.3.4
```
Where `<HOSTIP>` is the IP address of your host machine.
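On the host side, you can then attach a kernel debugger over the network using the same port and key as the `bcdedit /dbgsettings` line above (shown here with WinDbg):
```cmd
windbg -k net:port=55000,key=1.2.3.4
```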
## Resources
- [Hypervisor From Scratch](https://rayanfam.com/topics/hypervisor-from-scratch-part-1/)
- [HyperDbg](https://github.com/HyperDbg/HyperDbg)
## Personal Thanks & Contributors
- [Sinaei](https://x.com/Intel80x86): For his help with answering questions I had and for his amazing work on HyperDbg and Hypervisor From Scratch.
- [memN0ps](https://github.com/memN0ps/): For his help with answering questions I had and pointing me to the right resources.
|
https://github.com/salykova/sgemm.cu
|
sgemm.cu
High-Performance SGEMM on CUDA devices
Languages: Cuda (79.8%), C++ (9.3%), C (5.0%), Shell (2.8%), CMake (1.7%), Python (1.4%)
assets
common
scripts
src
...
.clang-format
.clangd
.gitignore
CMakeLists.txt
LICENSE
> README.md
# High-Performance SGEMM on NVIDIA GPUs
> **Important note:** while the implementation is expected to perform well on all Ada/Ampere/Volta/Turing devices, it was specifically fine-tuned for and tested on the NVIDIA RTX 3090 (GA102 chip, which it shares with the RTX 3080, A10, A40 and A6000).
## Benchmark
>Avoid using WSL for performance measurements. To ensure accurate and reliable results, please use a native Linux environment.
To benchmark the code, specify the compute capability of your CUDA device and run `benchmark.sh`. For example, on an RTX 3090:
```bash
bash benchmark.sh 86
```
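If you are unsure of your device's compute capability, recent NVIDIA drivers can report it directly; whether your driver supports this query is an assumption, and the CUDA `deviceQuery` sample works as an alternative:
```bash
# Prints e.g. "8.6" on an RTX 3090; pass it to benchmark.sh without the dot
nvidia-smi --query-gpu=compute_cap --format=csv,noheader
```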
The benchmark settings such as minimum/maximum matrix sizes, step size, number of warm-up iterations etc. can be adjusted in the `benchmark.sh` file.
To visualize benchmark results, please install `matplotlib` and run
```bash
python plot_benchmark_data.py benchmark_results
```
## Tests
Use `test.sh` to test the implementation for correctness. For example, on RTX 3090:
```bash
bash test.sh 86
```
## Performance
Test environment:
- OS: Ubuntu 24.04.1 LTS
- GPU: NVIDIA RTX 3090
- Driver Version: 550.120
- CUDA Driver: 12.4, CUDA Runtime: 12.6, V12.6.85
- CMake 3.28.3
- g++ 13.3
<p align="center">
<img src="assets/perf.png" alt="perf" width="85%">
</p>
<p align="center">
<img src="assets/perf_locked.png" alt="perf" width="85%">
</p>
|
https://github.com/facebook/jemalloc
|
jemalloc
Meta fork of the OG Jemalloc project
Languages:
.github/workflows
bin
build-aux
doc
doc_internal
...
.appveyor.yml
.autom4te.cfg
.cirrus.yml
.clang-format
.git-blame-ignore-revs
<no readme found>
|
https://github.com/OpenBMB/CPM.cu
|
CPM.cu
CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge techniques in sparse architecture, speculative sampling and quantization.
Languages: Cuda (49.7%), C++ (29.6%), Python (20.7%)
.github/ISSUE_TEMPLATE
cpmcu
examples
scripts
src
...
.gitignore
.gitmodules
LICENSE
README.md
README_ZH.md
> README.md
# CPM.cu
<strong>[中文版本](./README_ZH.md) | English</strong>
CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge techniques in **sparse architecture**, **speculative sampling** and **quantization**.
<div id="news"></div>
## 🔥 Project Updates
- [2025.06.06] Optimized for [MiniCPM4](https://github.com/openbmb/minicpm).
- Support InfLLM-v2 attention kernel
- Support sliding-window for the MTP layer, optimized for long context
- Support quantization for the MTP layer
- [2025.05.29] Support Quantization at [SpecMQuant](https://github.com/AI9Stars/SpecMQuant).
- Support Marlin GPTQ kernel for the LLM
- Support Speculative Sampling for quantized LLM
- [2025.03.01] Release the first version at [FR-Spec](https://github.com/thunlp/FR-Spec).
- SOTA Speculative Sampling Implementation
- Support FR-Spec: Frequency-Ranked Speculative Sampling
- Support Tree-based verification of Speculative Sampling in Flash-Attention
- Support Static memory management and memory reuse
- Support Fused kernels
- Support Chunked prefill
- Support CUDA Graph
<div id="demo"></div>
## Demo
https://github.com/user-attachments/assets/ab36fd7a-485b-4707-b72f-b80b5c43d024
<div id="getstart"></div>
## Getting Started
- [Installation](#install)
- [Model Weights](#modelweights)
- [Quick Start](#example)
- [OpenAI API Server](#openai-api)
<div id="install"></div>
## Installation
### Install from source
This library's build depends on torch and ninja. Please install both before installing this library.
```bash
git clone https://github.com/OpenBMB/CPM.cu.git --recursive
cd CPM.cu
pip install .
```
If you encounter installation issues, please follow the error messages to resolve them or create a GitHub issue. You can use `python setup.py --help-config` to view more installation configuration options.
<div id="modelweights"></div>
## Prepare Model
Please follow [MiniCPM4's README](https://github.com/openbmb/minicpm) to download the model weights.
<div id="example"></div>
## Quick Start
We provide a simple example to show how to use CPM.cu to generate text.
```bash
cd examples
python3 minicpm4/test_generate.py --prompt-file <your prompt file>
```
If you don't specify the model path, the scripts will load the model from OpenBMB's Hugging Face repository.
If you want to use local paths, we recommend keeping all model filenames unchanged and placing them in the same directory. This way, you can run the model by specifying the directory with the -p parameter. Otherwise, we suggest modifying the paths in the code accordingly.
You can use --help to learn more about the script's features.
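For example, assuming you have downloaded the MiniCPM4 weights into a local directory (the directory name below is hypothetical), the generation script can be pointed at it with `-p`:
```bash
cd examples
python3 minicpm4/test_generate.py -p /path/to/minicpm4-weights --prompt-file ../prompt.txt
```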
We also provide a script, `examples/long_prompt_gen.py`, to generate long code summarization.
This script automatically collects code from this repository and prompts the model to "Summarize the code."
```bash
cd examples
python3 long_prompt_gen.py # generate prompt.txt (for more details, use --help)
python3 minicpm4/test_generate.py --prompt-file ../prompt.txt
```
The output should be of the following format:
```bash
Generated text (streaming output):
--------------------------------------------------
Prefilling: 100.0% (106850/106850 tokens) @ 6565.3 tokens/s - Complete!
<Generated Output HERE>
==================================================
Stream Generation Summary:
==================================================
Prefill length: 106850
Prefill time: 16.36 s
Prefill tokens/s: 6530.77
Mean accept length: 2.50
Decode length: 118
Decode time: 0.76 s
Decode tokens/s: 154.59
```
Where:
- the `Prefill` and `Decode` stages are each reported with their length, time, and tokens/s.
- the `Mean accept length` is the average length of the accepted tokens when using Speculative Sampling.
<div id="openai-api"></div>
## OpenAI API Server (experimental)
Start the OpenAI-compatible API server (same args as `examples/minicpm4/test_generate.py`):
```bash
cd examples
python minicpm4/start_server.py [options]
```
Test the API (supports streaming and non-streaming modes):
```bash
cd examples
python test_openai_api.py [--no-stream]
```
Only `/v1/chat/completions` is supported and the `model` field is ignored.
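As a quick smoke test, you can also hit the endpoint with a plain OpenAI-style request. The port below is an assumption, so adjust it to whatever `start_server.py` reports; the `model` field is ignored, as noted above:
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "minicpm4", "messages": [{"role": "user", "content": "Hello!"}]}'
```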
## Code Structure
```bash
CPM.cu/
├── src/
│ ├── flash_attn/ # attention kernels: sparse, tree-verification, etc.
│ ├── model/
│ │ ├── minicpm4/ # minicpm4 model
│ │ ├── w4a16_gptq_marlin/ # marlin kernel
│ │ └── ... # common layers
│ ├── entry.cu # pybind: bind cuda and python
│ └── ...
├── cpmcu/ # python interface
└── ...
```
## More
### Word Frequency File Generation
We provide a word frequency generation script for FR-Spec, located at "scripts/fr_spec/gen_fr_index.py". You can run it as follows:
```bash
python scripts/fr_spec/gen_fr_index.py --model_path <your_model_path>
```
You can modify the code to use your own dataset. If your task is in a specific vertical domain, constructing word frequencies tailored to that domain can significantly improve processing speed.
### GPTQ to Marlin Conversion
We provide a script to convert GPTQ-quantized model to Marlin format, located at "scripts/model_convert/gptq2marlin.py". You can run it as follows:
```bash
python scripts/model_convert/gptq2marlin.py \
--src <gptq_model_path> \
--dst <marlin_model_path>
```
This script supports MiniCPM, Llama and EAGLE formats. It will automatically detect the model type and perform the appropriate conversion.
## Acknowledgments
Our `src/flash_attn` folder is modified from [FlashAttention](https://github.com/Dao-AILab/flash-attention/tree/v2.6.3/csrc/flash_attn).
We have drawn inspiration from the following repositories:
- [EAGLE](https://github.com/SafeAILab/EAGLE)
- [Block-Sparse-Attention](https://github.com/mit-han-lab/Block-Sparse-Attention)
- [vLLM](https://github.com/vllm-project/vllm)
- [SGLang](https://github.com/sgl-project/sglang)
## Citation
Please cite our paper if you find our work valuable.
```
@article{zhao2025fr,
title={FR-Spec: Accelerating Large-Vocabulary Language Models via Frequency-Ranked Speculative Sampling},
author={Zhao, Weilin and Pan, Tengyu and Han, Xu and Zhang, Yudi and Sun, Ao and Huang, Yuxiang and Zhang, Kaihuo and Zhao, Weilun and Li, Yuxuan and Wang, Jianyong and others},
journal={arXiv preprint arXiv:2502.14856},
year={2025}
}
@article{zhang2025specmqaunt,
title={Speculative Decoding Meets Quantization: Compatibility Evaluation and Hierarchical Framework Design},
author={Zhang, Yudi and Zhao, Weilin and Han, Xu and Zhao, Tiejun and Xu, Wang and Cao, Hailong and Zhu, Conghui},
journal={arXiv preprint arXiv:2505.22179},
year={2025}
}
@article{minicpm4,
title={MiniCPM4: Ultra-Efficient LLMs on End Devices},
author={MiniCPM},
year={2025}
}
```
|
https://github.com/amd/MxGPU-Virtualization
|
MxGPU-Virtualization
Languages: C (98.5%), C++ (1.0%), Python (0.3%), CSS (0.1%), Makefile (0.1%), M4 (0.0%)
dkms
gim-coms-lib
gim_shim
libgv
package
...
.gitignore
Kconfig
LICENSE
Makefile
README.md
> README.md
# GIM
## What is GIM?
[GIM](https://github.com/amd/MxGPU-Virtualization#) (GPU-IOV Module) is a Linux kernel module for AMD's SR-IOV based HW virtualization (MxGPU) products. It supports KVM based hypervisors with the necessary kernel compatibility layer. GIM is responsible for:
* GPU IOV initialization
* Virtual function configuration and enablement
* GPU scheduling for world switch
* Hang detection and virtual function level reset (FLR)
* PF/VF handshake and other GPU utilities.
## DOCUMENTATION:
Please check out our [User Guide](https://instinct.docs.amd.com/projects/virt-drv/en/latest/) for instructions on how to set up GIM and example configurations to run SR-IOV enabled VMs.
## Hardware/Features supported:
Please check the latest [release note](https://github.com/amd/MxGPU-Virtualization/releases).
|
https://github.com/ramseymcgrath/PCILeechFWGenerator
|
PCILeechFWGenerator
Automatically generates custom pcileech firmware from real pcie devices. Supports behavior inspection, advanced customization options and multiple profiles.
Languages: Python (73.9%), Jinja (9.6%), HTML (8.7%), SystemVerilog (5.3%), Shell (1.2%), C (0.6%)
.github
.vscode
_site
boards
configs/devices
...
.coveragerc
.dockerignore
.gitignore
.pre-commit-config.yaml
.readthedocs.yml
> README.md
# PCILeech Firmware Generator
Generate authentic PCIe DMA firmware from real donor hardware with a single command. This tool extracts donor configurations from a local device and generates unique PCILeech FPGA bitstreams (and optionally flashes a DMA card over USB-JTAG).
> [!WARNING]
> This tool requires *real* hardware. The templates are built using the device identifiers directly from a donor card and placeholder values are explicitly avoided. Using your own donor device ensures your firmware will be unique.
## ✨ Key Features
- **Donor Hardware Analysis**: Extract real PCIe device configurations and register maps from live hardware via VFIO
- **Dynamic Device Capabilities**: Generate realistic network, storage, media, and USB controller capabilities with pattern-based analysis
- **Full 4KB Config-Space Shadow**: Complete configuration space emulation with BRAM-based overlay memory
- **MSI-X Table Replication**: Exact replication of MSI-X tables from donor devices with interrupt delivery logic
- **Deterministic Variance Seeding**: Consistent hardware variance based on device serial number for unique firmware
- **Advanced SystemVerilog Generation**: Comprehensive PCIe device controller with modular template architecture
- **Active Device Interrupts**: MSI-X interrupt controller with timer-based and event-driven interrupt generation
- **Memory Overlay Mapping**: BAR dispatcher with configurable memory regions and custom PIO windows
- **Interactive TUI**: Modern Textual-based interface with real-time device monitoring and guided workflows
- **Containerized Build Pipeline**: Podman-based synthesis environment with automated VFIO setup
- **Automated Testing and Validation**: Comprehensive test suite with SystemVerilog assertions and Python unit tests
- **USB-JTAG Flashing**: Direct firmware deployment to DMA boards via integrated flash utilities
📚 **[Complete Documentation](https://pcileechfwgenerator.ramseymcgrath.com)** | 🏗️ **[Device Cloning Guide](https://pcileechfwgenerator.ramseymcgrath.com/device-cloning)** | ⚡ **[Dynamic Capabilities](https://pcileechfwgenerator.ramseymcgrath.com/dynamic-device-capabilities)** | 🔧 **[Development Setup](https://pcileechfwgenerator.ramseymcgrath.com/development)**
## 🚀 Quick Start
### Installation
```bash
# Install with TUI support (recommended)
pip install pcileechfwgenerator[tui]
# Load required kernel modules
sudo modprobe vfio vfio-pci
```
### Requirements
- **Python ≥ 3.9**
- **Donor PCIe card** (any inexpensive NIC, sound, or capture card)
- **Linux OS** (You need this)
### Optional Requirements
- **Podman** (_not Docker_ - required for proper PCIe device mounting). You can either use Podman or run the Python tooling locally; Linux is required for both options.
- **DMA board** (pcileech_75t484_x1, pcileech_35t325_x4, or pcileech_100t484_x1). You don't need to flash your firmware with this tooling, but you can.
- **Vivado Studio** (2022.2+ for synthesis and bitstream generation). You can use a locally generated Vivado project or insert the files into an existing one.
### Basic Usage
```bash
# Interactive TUI (recommended for first-time users)
sudo python3 pcileech.py tui
# CLI interface for scripted builds
sudo python3 pcileech.py build --bdf 0000:03:00.0 --board pcileech_35t325_x1
# CLI build with custom Vivado settings
sudo python3 pcileech.py build --bdf 0000:03:00.0 --board pcileech_35t325_x1 \
--vivado-path /tools/Xilinx/2025.1/Vivado --vivado-jobs 8 --vivado-timeout 7200
# Check VFIO configuration
sudo python3 pcileech.py check --device 0000:03:00.0
# Flash firmware to device
sudo python3 pcileech.py flash output/firmware.bin
```
> [!NOTE]
> The legacy entrypoint has been removed; please see the steps above and update your scripts accordingly.
### Development from Repository
```bash
git clone https://github.com/ramseymcgrath/PCILeechFWGenerator.git
cd PCILeechFWGenerator
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
sudo -E python3 pcileech.py tui
```
## 🔧 Troubleshooting
### VFIO Setup Issues
> [!WARNING]
> Avoid using on-board devices (audio, graphics cards) for donor info. The VFIO process can lock the bus during extraction and cause system reboots.
The most common issues involve VFIO (Virtual Function I/O) configuration. Use the built-in diagnostic tool:
```bash
# Check VFIO setup and device compatibility
sudo python3 pcileech.py check
# Check a specific device
sudo python3 pcileech.py check --device 0000:03:00.0
# Interactive mode with guided fixes
sudo python3 pcileech.py check --interactive
# Attempt automatic fixes
sudo python3 pcileech.py check --fix
```
### Common VFIO Problems
**1. IOMMU not enabled in BIOS/UEFI**
```bash
# Enable VT-d (Intel) or AMD-Vi (AMD) in BIOS settings
# Then add to /etc/default/grub GRUB_CMDLINE_LINUX:
# For Intel: intel_iommu=on
# For AMD: amd_iommu=on
sudo update-grub && sudo reboot
```
**2. VFIO modules not loaded**
```bash
sudo modprobe vfio vfio_pci vfio_iommu_type1
```
**3. Device not in IOMMU group**
```bash
# Check IOMMU groups
find /sys/kernel/iommu_groups/ -name '*' -type l | grep YOUR_DEVICE_BDF
```
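Alternatively, the standard sysfs layout exposes the group directly per device; a minimal check using the example BDF from the commands above:
```bash
# The trailing component of the printed path is the device's IOMMU group number
readlink /sys/bus/pci/devices/0000:03:00.0/iommu_group
```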
**4. Permission issues**
```bash
# Add user to required groups
sudo usermod -a -G vfio $USER
sudo usermod -a -G dialout $USER # For USB-JTAG access
```
### Installation Issues
```bash
# If pip installation fails
pip install --upgrade pip setuptools wheel
pip install pcileechfwgenerator[tui]
# For TUI dependencies
pip install textual rich psutil watchdog
# Container issues
podman --version
podman info | grep rootless
```
> [!NOTE]
> If you run into issues with your Vivado project file formatting, first clear out all of your cached files and rerun. Otherwise, try pulling a copy of the PCILeech repo directly and inserting the generator output into it.
## 📚 Documentation
For detailed information, please visit our **[Documentation Site](https://pcileechfwgenerator.ramseymcgrath.com)**:
- **[Device Cloning Process](https://pcileechfwgenerator.ramseymcgrath.com/device-cloning)** - Complete guide to the cloning workflow
- **[Firmware Uniqueness](https://pcileechfwgenerator.ramseymcgrath.com/firmware-uniqueness)** - How authenticity is achieved
- **[Manual Donor Dump](https://pcileechfwgenerator.ramseymcgrath.com/manual-donor-dump)** - Step-by-step manual extraction
- **[Development Setup](https://pcileechfwgenerator.ramseymcgrath.com/development)** - Contributing and development guide
- **[TUI Documentation](https://pcileechfwgenerator.ramseymcgrath.com/tui-readme)** - Interactive interface guide
- **[Config space info](https://pcileechfwgenerator.ramseymcgrath.com/config-space-shadow)** - Config space shadow info
## 🧹 Cleanup & Safety
- **Rebind donors**: Use TUI/CLI to rebind donor devices to original drivers
- **Keep firmware private**: Generated firmware contains real device identifiers
- **Use isolated build environments**: Never build on production systems
- **Container cleanup**: `podman rmi pcileechfwgenerator:latest`
> [!IMPORTANT]
> This tool is intended for educational research and legitimate PCIe development purposes only. Users are responsible for ensuring compliance with all applicable laws and regulations. The authors assume no liability for misuse of this software.
## 🏆 Acknowledgments
- **PCILeech Community**: For feedback and contributions
- @Simonrak for the writemask implementation
## 📄 License
This project is licensed under the Apache License - see the [LICENSE](LICENSE) file for details.
## ⚠️ Legal Notice
*AGAIN* This tool is intended for educational research and legitimate PCIe development purposes only. Users are responsible for ensuring compliance with all applicable laws and regulations. The authors assume no liability for misuse of this software.
**Security Considerations:**
- Never build firmware on systems used for production or sensitive operations
- Use isolated build environments (separate, dedicated hardware)
- Keep generated firmware private and secure
- Follow responsible disclosure practices for any security research
- Use the SECURITY.md template to raise security concerns
---
|
https://github.com/allkern/iris
|
iris
Experimental PlayStation 2 emulator
Languages: C (61.5%), C++ (37.7%)
.github
frontend
res
shaders
src
...
.gitignore
.gitmodules
AppImage.cmake
CMakeLists.txt
Info.plist
> README.md
<div align="center" text-align="center" width="100%">
<img width="55%" src="https://github.com/user-attachments/assets/d59e2d95-5791-4497-9985-442ca5115ac6">
</div>
# 🐣 Iris
Experimental Sony PlayStation 2 emulator and debugger
## Screenshots
<div align="center" class="grid" markdown>
<img width="45%" src="https://github.com/user-attachments/assets/39106951-9d45-484f-b4ae-13197305bf06"/>
<img width="45%" src="https://github.com/user-attachments/assets/e7d24d24-ccac-4239-baba-80d880db35bf"/>
<img width="45%" src="https://github.com/user-attachments/assets/3d2499fd-304e-4f2c-a1ce-677912f13753"/>
<img width="45%" src="https://github.com/user-attachments/assets/de37505e-efea-4d3a-94fe-3438b2e9722b"/>
<img width="45%" src="https://github.com/user-attachments/assets/d97b16fe-f59f-4174-97eb-f4dadf4c4df0"/>
<img width="45%" src="https://github.com/user-attachments/assets/f061db57-96f3-4fad-94ea-8b023a5875ad"/>
<img width="45%" src="https://github.com/user-attachments/assets/5ac202f5-eb74-493f-bb35-c6acf752a50b"/>
<img width="45%" src="https://github.com/user-attachments/assets/099ddda9-4f7f-4d8d-8071-40741bbd3bfc"/>
</div>
## Usage
> [!WARNING]
> This emulator is under development, most games WILL run at very low/unplayable framerates.
Iris has a graphical user interface and also supports launching from the command line:
```
Usage: iris [OPTION]... <path-to-disc-image>
-b, --bios Specify a PlayStation 2 BIOS dump file
--rom1 Specify a DVD player dump file
--rom2 Specify a ROM2 dump file
-d, --boot Specify a direct kernel boot path
-i, --disc Specify a path to a disc image file
-x, --executable Specify a path to an ELF executable to be
loaded on system startup
--slot1 Specify a path to a memory card file to
be inserted on slot 1
--slot2 Specify a path to a memory card file to
be inserted on slot 2
-h, --help Display this help and exit
-v, --version Output version information and exit
```
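For example, a typical invocation that boots a disc image with a specific BIOS dump might look like this (file names are placeholders):
```
./iris --bios bios/scph10000.bin --disc games/my-game.iso
```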
Launching a game or executable through the GUI is also very easy: you can either go to Iris > Open... and pick a disc image or ELF executable, or just drop a file onto Iris' window to launch it!
## Building
> [!WARNING]
> Building requires CMake on all supported platforms
### Linux
Building on Linux requires installing SDL3 dependencies and FUSE if you wish to generate AppImages.
```
sudo apt update
sudo apt upgrade
sudo add-apt-repository universe
sudo apt-get install build-essential git make \
pkg-config cmake ninja-build gnome-desktop-testing libasound2-dev libpulse-dev \
libaudio-dev libjack-dev libsndio-dev libx11-dev libxext-dev \
libxrandr-dev libxcursor-dev libxfixes-dev libxi-dev libxss-dev libxtst-dev \
libxkbcommon-dev libdrm-dev libgbm-dev libgl1-mesa-dev libgles2-mesa-dev \
libegl1-mesa-dev libdbus-1-dev libibus-1.0-dev libudev-dev libfuse2t64
```
Then just clone the repository and run CMake:
```
git clone https://github.com/allkern/iris --recursive
cd iris
cmake -S . -B build
cmake --build build -j8
```
Optionally run `cmake --install build` to generate an AppImage.
### Windows
We currently only support GCC as a compiler on Windows, because MSVC doesn't have an inline assembler, which we need in order to embed resources into the executable. This might eventually be fixed, though!
```
git clone https://github.com/allkern/iris --recursive
cd iris
cmake -S . -B build -G "MinGW Makefiles"
cmake --build build -j8
```
### macOS
Iris finally got working macOS builds!
```
git clone https://github.com/allkern/iris --recursive
cd iris
cmake -S . -B build
cmake --build build -j8
```
Optionally run `sudo cmake --install build` to generate a macOS App Bundle
## Progress
### Commercial games
A small number of commercial games boot in-game, and a slightly bigger set of games can boot to the title screen. Most of them do nothing though, and the ones that do usually run way too slowly to be playable.
### BIOS
Pretty much all BIOSes I've tried work just fine, even some obscure ones like the Chinese BIOS and the PSX DESR BIOS (more on this later).
It is also possible to specify paths to ROM1 (DVD player) and ROM2 (Chinese extensions, required for the Chinese BIOS).
## PSX DESR
Support for the PSX DESR console is early but somewhat functional. The DESR BIOS plays the boot animation but later fails some sort of diagnostic test. The DESR requires Flash, ATA and MagicGate emulation, which Iris doesn't yet support.
Booting to the XMB should be possible once these features are implemented, and is one of my medium-term goals for this project.
If you want to try it for yourself, you need to dump the BIOS out of your PSX console, then just clone the `desr` branch, build the emulator and set up the BIOS, ROM1 and ROM2 dumps in Settings > BIOS, or through the command line.
# Special thanks and acknowledgements
I would like to thank the emudev Discord server, Ziemas, Nelson (ncarrillo), cakehonolulu, PSI-rockin, noumi and the PCSX2 team for their kind support.
This project makes use of ImGui, gl3w, toml++, Portable File Dialogs and stb_image
### Components
This console is significantly more complex compared to the PS1, here's a rough list of components:
```
🟡 EE (R5900) CPU
- 🟡 FPU
- 🟡 MMI (SIMD)
- 🟡 TLB
- 🟡 DMAC
- 🟢 INTC
- 🟡 Timers
- 🟢 GIF
- 🟡 GS
- 🟡 VU0
= 🟡 Macro mode
= 🟡 Micro mode
= 🟡 VIF0
- 🟡 VU1 (always micro mode)
= 🟡 VIF1
- 🟡 IPU
🟢 IOP (R3000) CPU
- 🟡 DMAC
- 🟢 INTC
- 🟡 Timers
- 🟢 CDVD
- 🟢 SIO2 (controllers and Memory Cards)
- 🟢 SPU2
- 🟡 DEV9
- 🟡 USB/FireWire?
- 🔴 Ethernet
- 🔴 PS1 backcompat (PS1 hardware)
🟢 SIF
```
|
https://github.com/x86matthew/WinVisor
|
WinVisor
WinVisor - A hypervisor-based emulator for Windows x64 user-mode executables using Windows Hypervisor Platform API
Languages: C++ (73.9%), C (26.1%)
Common
WinVisor
WinVisorDLL
x64/release
...
LICENSE
README.md
WinVisor.sln
winvisor_screenshot.png
> README.md
# WinVisor
## Overview
In Windows 10 (version RS4), Microsoft introduced the Windows Hypervisor Platform (WHP) API. This API exposes Microsoft's built-in hypervisor functionality to user-mode Windows applications. In 2024, I used this API to create another project: a 16-bit MS-DOS emulator called DOSVisor. This project takes the concept further, and allows Windows x64 executables to be emulated within a virtualized environment.
The WHP API allows applications to create a virtual CPU, and map virtual memory from the host process directly into the guest's physical memory. The emulator uses this functionality to build a virtual environment which contains everything needed to execute a Windows user-mode process. This involves building up the memory space within the guest, including mapping the target executable and all DLL dependencies, followed by populating other internal data structures such as the `PEB`, `TEB`, `KUSER_SHARED_DATA`, etc.
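As a rough illustration of that WHP workflow, here is a minimal C++ sketch (not WinVisor's actual code) that creates a one-vCPU partition, maps a host buffer into guest physical memory, and runs the vCPU until its first exit:
```cpp
#include <windows.h>
#include <WinHvPlatform.h>   // link against WinHvPlatform.lib
#include <cstdio>

int main()
{
    // NOTE: HRESULT error checking is omitted for brevity; a real host must
    // also initialize the guest register state before running the vCPU.
    WHV_PARTITION_HANDLE partition = nullptr;
    WHvCreatePartition(&partition);

    // One virtual processor for the guest
    WHV_PARTITION_PROPERTY prop = {};
    prop.ProcessorCount = 1;
    WHvSetPartitionProperty(partition, WHvPartitionPropertyCodeProcessorCount,
                            &prop, sizeof(prop));
    WHvSetupPartition(partition);

    // Map a page of host process memory into guest physical address 0
    void* hostPage = VirtualAlloc(nullptr, 0x1000, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    WHvMapGpaRange(partition, hostPage, 0 /* guest physical address */, 0x1000,
                   WHvMapGpaRangeFlagRead | WHvMapGpaRangeFlagWrite | WHvMapGpaRangeFlagExecute);

    // Create the virtual CPU and run it until the first VM exit
    WHvCreateVirtualProcessor(partition, 0 /* VP index */, 0);
    WHV_RUN_VP_EXIT_CONTEXT exitContext = {};
    WHvRunVirtualProcessor(partition, 0, &exitContext, sizeof(exitContext));
    printf("First VM exit reason: %d\n", (int)exitContext.ExitReason);

    WHvDeletePartition(partition);
    return 0;
}
```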
Mapping the EXE and DLL dependencies into memory is a simple task, but accurately maintaining internal structures, such as the PEB, is more complex. These structures are large, mostly undocumented, and their contents can vary between Windows versions. Instead of manually building up the memory layout within the virtual environment, WinVisor launches a suspended instance of the target process and clones the entire address space into the guest. The IAT and TLS data directories are temporarily removed from the PE headers in memory to stop DLL dependencies from loading and to prevent TLS callbacks from executing before reaching the entry point. The process is then resumed, allowing the usual process initialization to continue until it reaches the entry point of the target executable, at which point the hypervisor launches and takes control.
As the WHP API only allows memory from the current process to be mapped into the guest, the main hypervisor logic is encapsulated within a DLL that gets injected into the target process.
At the present time, the emulator simply forwards all syscalls to the host OS and logs them to the console. However, the project provides a framework to easily facilitate syscall hooks if necessary.
## Usage
WinVisor has some limitations in its current form, the biggest one being that it currently only supports virtualizing a single thread. The other limitations are described in further detail in the **Limitations** section below.
Despite these limitations, it still works well with many executables. It has been tested successfully against built-in Windows executables such as `cmd.exe`, `ping.exe`, and even GUI applications such as `mspaint.exe` and `notepad.exe` (although these only run partially virtualized as described later).
To launch WinVisor, simply execute the following command:
`WinVisor.exe <target_executable_path>`
Command-line parameters can also be specified for the target application, for example:
`WinVisor.exe c:\windows\system32\ping.exe 8.8.8.8`
If `[ERROR] Failed to initialise Windows Hypervisor Platform API` is displayed, please ensure that `Windows Hypervisor Platform` is installed and enabled in "Windows Features".

*(screenshot above shows WinVisor emulating `cmd.exe` within a virtualized environment)*
## Virtual CPU
The emulator creates a virtual CPU via WHP to execute the target binary. The virtual CPU operates almost exclusively in CPL3 (user-mode), except for a small bootloader that runs at CPL0 (kernel-mode) to initialize the CPU state before execution. The initialization process involves setting up the following aspects:
- Control registers (`CR0`, `CR3`, `CR4`, `XCR0`)
- MSRs (`MSR_EFER`, `MSR_LSTAR`, `MSR_STAR`, `MSR_GS_BASE`)
- GDT
- IDT
- TSS
- Initial segment selectors and register values
- Paging table (4-layer)
Once the initial CPU state has been set up, it switches to CPL3 via a `SYSRET` instruction and begins executing the target application.
The emulator handles both `SYSCALL` instructions and legacy (`INT 2E`) syscalls. To catch system calls performed via the `SYSCALL` instruction, the `MSR_LSTAR` value is set to a reserved placeholder address. This placeholder address exists in kernel space, ensuring that no conflicts occur with real user-mode memory within the process. When the virtual CPU attempts to execute the `SYSCALL` instruction, a page fault exception is generated, causing a VM exit which indicates to the host that a syscall is pending.
Legacy interrupt-based syscalls are handled in a very similar way. The IDT is pre-populated with a range of placeholder handler addresses, causing a VM exit when an interrupt occurs. As the placeholder addresses are unique, the host can easily calculate which interrupt type is pending. In the case of legacy syscalls, an internal wrapper is used to proxy these calls to the same handler that is used by the `SYSCALL` instruction, before returning cleanly via `IRETQ`.
## Memory Paging
As mentioned earlier, the emulator creates a child process, and all virtual memory within that process is mapped directly into the guest using the same address layout. A paging table is used to map virtual addresses to the corresponding physical pages.
Instead of mapping the entire address space of the process upfront, a fixed number of physical pages are allocated for the guest. The emulator contains a very basic memory manager, and pages are mapped "on demand". When a page fault occurs, the requested page will be paged in, and execution resumes. If all page "slots" are full, the oldest entry is swapped out to make room for the new one.
In addition to using a fixed number of currently-mapped pages, the emulator also uses a fixed-size page table. The size of the page table is determined by calculating the maximum possible number of tables (`PML4`, `PDPT`, `PD`, `PT`) for the amount of mapped page entries. This model results in a simple and consistent physical memory layout but comes at the cost of efficiency. In fact, the paging tables take up more space than the actual page entries. However, as the emulator functions well even with a small number of allocated pages, this level of overhead is not a major concern.
## Limitations
**Single-thread only**
The emulator currently only supports virtualizing a single thread. If the target executable creates additional threads, they will be executed natively. To support multiple threads, a pseudo-scheduler could be developed to handle this in the future.
The Windows parallel loader is disabled to ensure all module dependencies are loaded by a single thread.
**Software exceptions**
Virtualized software exceptions are not currently supported. If an exception occurs, the system will call the `KiUserExceptionDispatcher` function natively within the target process as usual.
**Safety issues**
There are several ways to "escape" the VM, such as simply creating a new process/thread, scheduling APC calls, etc. Windows GUI-related syscalls can also make nested calls directly back into user-mode from the kernel, which would currently bypass the hypervisor layer. For this reason, GUI executables such as `notepad.exe` are only partially virtualized when run under WinVisor at this time.
**Shared host memory**
As the WinVisor host DLL is injected into the target process, it exists within the same virtual address space as the target executable in the guest. This means the code running within the virtual CPU is able to directly access the memory within the host hypervisor module, and could potentially corrupt it.
**Non-executable guest memory**
While the virtual CPU is set up to support NX, all memory regions are currently mirrored into the guest with full RWX access.
## Further Reading
This project is described in further detail in the following article: https://www.elastic.co/security-labs/winvisor-hypervisor-based-emulator
During development, I came across a similar project called [Simpleator](https://github.com/ionescu007/Simpleator) by Alex Ionescu. His project also utilizes the WHP API to emulate Windows x64 binaries, but is implemented in a very different way.
|
https://github.com/Taccel-Simulator/Taccel
|
Taccel
Taccel: Scaling-up Vision-based Tactile Robotics with High-performance GPU Simulation
Languages: Cuda (99.7%)
assets
examples
ptx
taccel
thirdparty/warp
...
.gitignore
LICENSE
README.md
pyproject.toml
requirements.txt
> README.md
# Taccel: Scaling-up Vision-based Tactile Robotics with High-performance GPU Simulation
[**Yuyang Li**](https://yuyangli.com)<sup>1,2 *</sup>,
[**Wenxin Du**](https://dwxrycb123.github.io/)<sup>3 *</sup>,
[**Chang Yu**](https://changyu.io/)<sup>3 *</sup>,
[**Puhao Li**](https://xiaoyao-li.github.io)<sup>2</sup>,
[**Zihang Zhao**](https://zihangzhao.com/)<sup>1</sup>,
[**Tengyu Liu**](https://tengyu.ai)<sup>2</sup>,
[**Chenfanfu Jiang**](https://www.math.ucla.edu/~cffjiang/)<sup>3 †</sup>,
[**Yixin Zhu**](https://yzhu.io)<sup>1 †</sup>,
[**Siyuan Huang**](https://siyuanhuang.com)<sup>2 †</sup>
<sup>1</sup> Institute for AI, PKU
<sup>2</sup> State Key Lab of General AI, BIGAI
<sup>3</sup> AIVC Lab, UCLA
<sup>*</sup> Equal Contributor
<sup>†</sup> Corresponding Author
[📄 [Paper](https://taccel-simulator.github.io/assets/taccel-paper.pdf) ]
[📘 [Docs](https://taccel-simulator.github.io) ]
[🛠️ [Code](https://github.com/Taccel-Simulator/Taccel) ]
[📊 Data (Coming Soon) ]
If you use Taccel in your research, please use the following citation:
```bibtex
@article{li2025taccel,
title={Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation},
author={Li, Yuyang and Du, Wenxin and Yu, Chang and Li, Puhao and Zhao, Zihang and Liu, Tengyu and Jiang, Chenfanfu and Zhu, Yixin and Huang, Siyuan},
journal={arXiv preprint arXiv:2504.12908},
year={2025}
}
```
|
https://github.com/Friends-Security/RedirectThread
|
RedirectThread
Playing around with Thread Context Hijacking. Building more evasive primitives to use as alternative for existing process injection techniques
Languages: C++ (93.1%), C (4.7%), PowerShell (1.6%), CMake (0.6%)
AlertableThreadsForDays
ETWThreadCreationNoise
RedirectThread
ShellcodeExamples
...
.gitattributes
.gitignore
CMakeLists.txt
LICENSE
README.md
> README.md
# RedirectThread
This tool explores various techniques for remote code execution and thread manipulation on Windows, originating from the `CONTEXT` struct.
For a detailed explanation of the research and techniques, please refer to our blog post: **[New Process Injection Class: The CONTEXT-Only Attack Surface](https://blog.fndsec.net/2025/05/16/the-context-only-attack-surface/)**
## TL;DR
Most process injection techniques follow a familiar pattern:
allocate → write → execute.
In this research, we ask: what if we skip allocation and writing entirely?
By focusing on execution-only primitives, we found distinct approaches to inject code without allocating / writing memory:
* Inject a DLL using only `LoadLibraryA`.
* Call arbitrary WinAPI functions with parameters using `SetThreadContext`, without suspending a thread.
* Utilize only `NtCreateThread` to remotely allocate, write and execute shellcode.
* Expand the technique to APC functions such as `QueueUserAPC`.
This isn’t classic thread hijacking — we don’t necessarily suspend/resume a thread mid-execution to overwrite it.
## Projects Included
This solution contains the following main projects:
* **`RedirectThread`**: A tool demonstrating various remote thread injection techniques utilizing the `CONTEXT` struct while avoiding allocating / writing memory remotely (and some ROP gadgets).
* **`AlertableThreadsForDays`**: A utility for creating alertable threads, for testing with APC-based injection methods.
## Usage
```
Usage: C:\RedirectThread.exe [options]
Required Options:
--pid <pid> Target process ID to inject into
--inject-dll Perform DLL injection (hardcoded to "0.dll")
--inject-shellcode <file> Perform shellcode injection from file
--inject-shellcode-bytes <hex> Perform shellcode injection from hex string (e.g. 9090c3)
Delivery Method Options:
--method <method> Specify code execution method
CreateRemoteThread Default, creates a remote thread
NtCreateThread Uses NtCreateThread (less traceable)
QueueUserAPC Uses QueueUserAPC (requires --tid)
QueueUserAPC2 Uses QueueUserAPC2 (requires --tid)
NtQueueApcThread Uses NtQueueApcThread (requires --tid)
NtQueueApcThreadEx Uses NtQueueApcThreadEx (requires --tid)
NtQueueApcThreadEx2 Uses NtQueueApcThreadEx2 (requires --tid)
Context Method Options:
--context-method <method> Specify context manipulation method
rop-gadget Default, uses ROP gadget technique
two-step Uses a two-step thread hijacking approach
Additional Options:
--tid <tid> Target thread ID (required for APC methods)
--alloc-size <size> Memory allocation size in bytes (default: 4096)
--alloc-perm <hex> Memory protection flags in hex (default: 0x40)
--alloc-address <hex> Specify base address for allocation (hex, optional)
--use-suspend Use thread suspension for increased reliability
--verbose Enable verbose output
--enter-debug Pause execution at key points for debugger attachment
Example:
C:\RedirectThread.exe --pid 1234 --inject-dll mydll.dll
C:\RedirectThread.exe --pid 1234 --inject-shellcode payload.bin --verbose
C:\RedirectThread.exe --pid 1234 --inject-shellcode payload.bin --method NtCreateThread
C:\RedirectThread.exe --pid 1234 --inject-shellcode-bytes 9090c3 --method QueueUserAPC --tid 5678
C:\RedirectThread.exe --pid 1234 --inject-shellcode-bytes $bytes --context-method two-step --method NtQueueUserApcThreadEx2 --tid 5678
```
## Building the Project
You can build this project using either CMake or Visual Studio directly with the provided solution file (`RedirectThread.sln`).
### Option 1: Using CMake
This project can be built using CMake. You can either use CMake from the command line (if CMake is installed and in your system's PATH) or leverage the CMake Tools extension if you are using Visual Studio Code.
#### Prerequisites
* A C++ compiler that supports C++17 (e.g., MSVC, GCC, Clang).
* CMake (version 3.10 or higher).
#### Build Steps
The following steps describe building with CMake from the command line. If you are using the CMake Tools extension in VSCode, you can often perform the configuration and build steps through the extension's UI instead of running these commands manually.
1. **Clone the repository:**
```bash
git clone <repository-url>
cd RedirectThread
```
2. **Create a build directory and navigate into it:**
```bash
mkdir build
cd build
```
3. **Configure the project with CMake:**
* For Visual Studio (example for Visual Studio 2019, 64-bit):
```bash
cmake .. -G "Visual Studio 16 2019" -A x64
```
* For Makefiles (example):
```bash
cmake ..
```
* For other generators, please refer to CMake documentation.
4. **Build the project:**
* For Visual Studio:
```bash
cmake --build . --config Release
```
* For Makefiles:
```bash
make
```
Executables will typically be located in a subdirectory within your build folder (e.g., `build/Release` or `build/RedirectThread/Release`).
### Option 2: Using Visual Studio Solution File
1. Open `RedirectThread.sln` in Visual Studio.
2. Select the desired build configuration (e.g., Release, x64).
3. Build the solution (Build > Build Solution).
Executables will be located in the respective project output directories (e.g., `x64/Release`).
|
https://github.com/dipampaul17/KVSplit
|
KVSplit
Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit keys & 4-bit values, reducing memory by 59% with <1% quality loss. Includes benchmarking, visualization, and one-command setup. Optimized for M1/M2/M3 Macs with Metal support.
Languages: Python (73.2%), Shell (26.8%)
.github/workflows
models
patch
plots
results
...
.gitignore
LICENSE
README.md
perplexity_test_data.txt
> README.md
<div align="center">
# 🚀 KVSplit
**Differentiated KV Cache Quantization for Apple Silicon**
<img src="./plots/kv_cache_memory_usage.png" alt="KV Cache Memory Usage" width="70%">
</div>
## 📌 Overview
Run **larger context windows** and **heavier LLMs** on your Mac by applying different quantization precision to keys vs values in the attention mechanism's KV cache. KVSplit enables you to:
- **Reduce memory usage by up to 72%** with minimal quality loss
- **Run 2-3x longer contexts** in the same memory budget
- **Maintain or improve inference speed** compared to FP16
- **Optimize for Apple Silicon** with full Metal support
## Key Findings
| Configuration | VRAM @ 8K tokens | Tokens/sec | Perplexity Change |
|---------------|-----------------|------------|-------------------|
| FP16 (base) | 176.00 MB (100%)| 54,360 | -- |
| K8V8 (8-bit) | 93.50 MB (47%) | 51,503 | +0.03% |
| **K8V4** | **71.50 MB (41%)** | **57,438** | **+0.86%** |
| K4V8 | 71.50 MB (41%) | 58,690 | +6.06% |
| K4V4 (4-bit) | 49.50 MB (28%) | 55,193 | +6.15% |
### Memory Savings by Sequence Length
| Configuration | 128 tokens | 2048 tokens | 4096 tokens | 8192 tokens |
|---------------|------------|-------------|-------------|-------------|
| FP16 (baseline) | 5.50 MB | 44.00 MB | 88.00 MB | 176.00 MB |
| K8V8 (8-bit) | 2.92 MB | 23.38 MB | 46.75 MB | 93.50 MB |
| K8V4 (mixed) | 2.23 MB | 17.88 MB | 35.75 MB | 71.50 MB |
| K4V8 (mixed) | 2.23 MB | 17.88 MB | 35.75 MB | 71.50 MB |
| K4V4 (4-bit) | 1.55 MB | 12.38 MB | 24.75 MB | 49.50 MB |
## Features
- Independent quantization of keys and values in the KV cache
- Optimized for Apple Silicon with Metal support
- Comprehensive benchmarking suite with perplexity measurement
- Memory usage and performance analysis tools
- Publication-quality visualization tools
- Easy setup and usage
## Prerequisites
- macOS (tested on Apple Silicon)
- Homebrew package manager
- Xcode Command Line Tools
## ⚡ Flexible Installation
```bash
# Clone the repository
git clone https://github.com/dipampaul17/KVSplit.git
cd KVSplit
# Run the installer script
chmod +x scripts/install_kvsplit.sh
./scripts/install_kvsplit.sh
```
The installer provides flexible options:
### 🐍 Python Setup Options
- **Virtual Environment** (default): Creates a standalone Python environment in the project folder
- **System Python**: Uses your existing Python installation instead of creating a virtual environment
- **Skip Python Setup**: For users who prefer to manage their Python environment manually
### 🔄 llama.cpp Integration Options
- **Standard Method** (default): Clones llama.cpp and applies the KV split patch
- **Git Submodule Method**: Adds llama.cpp as a git submodule (ideal for advanced users or development)
The installer will:
- Set up the project structure with your preferred configuration
- Configure llama.cpp with Metal support optimized for Apple Silicon
- Enable differentiated KV cache quantization
- Offer to download a small test model (optional)
- Set up visualization tools based on your Python preferences
## 🏎️ Quick Comparison
Want to see the benefits immediately? Run a quick comparison with your model:
```bash
# Run quick comparison with different configurations
python scripts/quick_compare.py --model models/your-model.gguf
```
This will show you a side-by-side comparison of FP16, K8V8, K8V4, K4V8, and K4V4 with memory usage, speed, and quality metrics.
## 📊 Impressive Results
<div align="center">
<img src="./plots/memory_vs_quality.png" alt="Memory vs Quality" width="50%">
</div>
### 📉 Memory Reduction
| Configuration | VRAM @ 8K tokens | Memory Savings | Quality Impact |
|---------------|-----------------|----------------|----------------|
| FP16 (base) | 176.00 MB | — | — |
| K8V8 (8-bit) | 93.50 MB | 47% | +0.03% |
| **K8V4** | **71.50 MB** | **59%** | **+0.86%** |
| K4V8 | 71.50 MB | 59% | +6.06% |
| K4V4 (4-bit) | 49.50 MB | 72% | +6.15% |
### 📈 Performance Impact
Using KVSplit doesn't just save memory—it often **improves inference speed** by 5-15%!
| Configuration | Tokens/sec (8K ctx) | Speedup vs FP16 |
|---------------|---------------------|----------------|
| FP16 | 54,360 | — |
| K8V8 | 51,503 | -5.3% |
| **K8V4** | **57,438** | **+5.7%** |
| K4V8 | 58,690 | +8.0% |
| K4V4 | 55,193 | +1.5% |
## 🧠 Project Structure
```
kvsplit/
├── llama.cpp/ # Optimized llama.cpp build
├── models/ # LLM model files
├── scripts/ # Utility scripts
│ ├── benchmark_kvsplit.py # Comprehensive benchmark tool
│ ├── install_kvsplit.sh # One-command installer
│ ├── quick_compare.py # Quick comparison utility
│ ├── capture_memory.sh # GIF creation for memory visualization
│ └── visualize_results.py # Generate publication-quality plots
├── results/ # Benchmark results (CSV/JSON)
├── plots/ # Generated visualizations
└── README.md # This file
```
## 🔬 Scientific Insight
<div align="center">
<img src="./plots/configuration_summary.png" alt="Configuration Summary" width="80%">
</div>
KV cache memory is dominated by storing key and value vectors for each token. Our research has revealed a critical insight: **keys are significantly more sensitive to quantization than values**.
### 🔑 Key Findings
- **Asymmetric Impact**: Keys require higher precision than values for maintaining quality
- **Sweet Spot**: K8V4 (8-bit keys, 4-bit values) provides optimal balance
- Only 0.86% perplexity degradation vs. FP16
- 59% memory reduction
- Faster inference than FP16
- **Confirmation**: K4V8 configuration shows 7x more quality degradation than K8V4, despite using the same total bits
This asymmetry allows for more efficient memory usage without compromising model quality, enabling longer context windows and larger models on consumer hardware.
## 💻 Usage Examples
### Running with Different KV Cache Precisions
```bash
# Baseline (FP16)
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn
# ⭐ RECOMMENDED: 8-bit keys, 4-bit values (K8V4)
# Best balance of quality and memory savings
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn --kvq 8
# 4-bit keys, 8-bit values (K4V8)
# Shows why key precision matters more than value precision
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn --kvq-key 4 --kvq-val 8
# 4-bit keys and values (K4V4)
# Maximum memory savings (72% reduction) with acceptable quality
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf -p "Your prompt" \
-t 8 --flash-attn --kvq 4
```
### Long Context Example (32K)
```bash
# Run with a 32K context (would require ~1.4GB in FP16, only ~400MB with K8V4)
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf \
-c 32768 -n 4096 -t 8 --flash-attn --kvq 8 \
-f your-long-document.txt
```
### 🚩 Command-Line Arguments
| Flag | Description | Recommendation |
|------|-------------|---------------|
| `-t 8` | Number of threads | 8 is optimal for most Apple Silicon chips |
| `--flash-attn` | Enables optimized attention | Recommended for Apple Silicon |
| `--kvq N` | Sets both key and value bits to N | Use `--kvq 8` for K8V4 configuration |
| `--kvq-key N` | Sets key bits only | Key precision has major quality impact |
| `--kvq-val N` | Sets value bits only | Value precision has minor quality impact |
| `-c N` | Context size in tokens | Longer contexts benefit more from KVSplit |
| `-n N` | Number of tokens to generate | Adjust based on your needs |
| `-f FILE` | Input file | For processing documents |
| `-m MODEL` | Model path | Path to your .gguf model file |
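Putting the flags from the table together, a single long-document run with the recommended K8V4 split might look like this (model and input paths are placeholders):
```bash
./llama.cpp/build/bin/llama-cli -m models/your-model.gguf \
  -f your-long-document.txt -c 16384 -n 512 \
  -t 8 --flash-attn --kvq-key 8 --kvq-val 4
```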
## 📏 Advanced Benchmarking
For comprehensive performance analysis, use our full benchmark suite:
```bash
# Run the full benchmark suite (all configurations and sequence lengths)
python scripts/benchmark_kvsplit.py
# Run a specific configuration test
python scripts/benchmark_kvsplit.py --config K8V4 --seq-len 4096
# Generate publication-quality visualizations
python scripts/visualize_results.py
```
The benchmarking script provides thorough measurements of:
- 📊 **Memory Usage**: VRAM and KV cache specifically
- ⚡ **Performance**: Tokens per second across different sequence lengths
- 🎯 **Quality**: Perplexity measurement using llama-perplexity
- 📈 **Scaling**: How memory usage and performance scale with sequence length
Results are saved in CSV/JSON formats with automatic summary statistics, and the visualization script generates publication-quality plots showing key insights.
## 🎬 Visual Memory Savings
You can visualize memory savings with our capture tool:
```bash
# Capture memory reduction in Activity Monitor
./scripts/capture_memory.sh
```
<div align="center">
<table>
<tr>
<td><img src="./plots/kv_cache_memory_usage.png" alt="Memory Usage" width="100%"></td>
<td><img src="./plots/key_value_sensitivity.png" alt="Key-Value Sensitivity" width="100%"></td>
</tr>
<tr>
<td><img src="./plots/perplexity_change.png" alt="Quality Impact" width="100%"></td>
<td><img src="./plots/inference_speed.png" alt="Speed Impact" width="100%"></td>
</tr>
</table>
</div>
## 🍎 Apple Silicon Optimization
- **Metal Performance**: Fully optimized for Apple's Metal framework
- **Memory Efficiency**: Critical for memory-constrained M series Apple silicon devices
- **Activity Monitor**: Use our `capture_memory.sh` script to visualize real-time memory reductions
- **Alignment**: 256B page alignment in llama.cpp means actual memory savings might differ slightly from theoretical calculations
## ⭐ Key Features
- **Differentiated Precision**: Independent key and value bit precision (K8V4, K4V8, etc)
- **Apple Silicon Optimization**: Full Metal support for M1/M2/M3/M4 chips
- **Comprehensive Benchmarking**: Memory, speed, and quality metrics
- **Publication-Quality Visualization**: Beautiful plots for analysis
- **Simple User Interface**: One-command install and quick comparison tools
- **Memory Visualization**: Tools to capture and visualize memory savings
## 🙏 Acknowledgments
This project implements ideas from recent research including:
- "More for Keys, Less for Values: Adaptive KV Cache Quantization" (2024)
- "Unifying KV Cache Compression for Large Language Models with LeanKV" (2025)
Additional credits:
- [llama.cpp](https://github.com/ggerganov/llama.cpp) - Base implementation
- [TinyLlama](https://huggingface.co/TinyLlama) - Test model
## 🧠 Configuration Recommendations
- **Best Overall**: 🌟 **K8V4** 🌟 (8-bit keys, 4-bit values)
- 59% memory reduction with only 0.86% quality loss
- Improved inference speed (+5.7% vs FP16)
- Great balance of quality and efficiency
- **Absolute Maximum Memory Savings**: K4V4 (4-bit keys and values)
- 72% memory reduction with ~6% quality loss
- Good for memory-constrained devices
- Acceptable for less sensitive applications
- **Best for Very Long Contexts**: K8V4 or K4V4
- Memory savings compound with context length
- Run 2-3x longer contexts in the same memory budget
## 🔮 Future Roadmap
- [ ] **Adaptive Precision**: Dynamic precision based on token importance
- [ ] **Layer-Specific Quantization**: Different precision for different model layers
- [ ] **Model-Specific Optimizations**: Tailored for Mistral, Phi-3, etc.
- [ ] **Web Demo**: Interactive testing environment
- [ ] **Mobile Support**: Adapting for iOS and iPadOS
## 📜 License
MIT
## 🤝 Contributing
Contributions are welcome! Please open an issue or submit a pull request.
|
https://github.com/vivoblueos/kernel
|
kernel
Languages: Rust (96.2%), C (2.5%)
CREDITS
CREDITS
emballoc
emballoc
header
header
images
images
infra
infra
...
.gitignore
.gitignore
.licenserc.yaml
.licenserc.yaml
LICENSE
LICENSE
README.md
README.md
README_zh.md
README_zh.md
> README.md
<div align="center">
<img src="./images/logo.png" width="280" />
</div>
\[ English | [简体中文](README_zh.md) \]
# BlueOS Kernel
The BlueOS kernel is written in Rust, with a focus on security, light weight, and generality. It is compatible with POSIX interfaces and supports Rust's standard library.
## Technical Architecture
For details, please visit the BlueOS official website [kernel](https://blueos.vivo.com/kernel) page.
## Board Support
BlueOS kernel currently supports ARM32, ARM64, RISCV32 and RISCV64 chip architectures.
- QEMU platforms are supported for corresponding chip architectures.
- Support for hardware boards is currently in progress.
## Repository Overview
| Repository Link | Description |
|----------------|-------------|
| apps | [Shell](https://github.com/vivoblueos/apps_shell) and [examples](https://github.com/vivoblueos/apps_example) developed based on Rust std |
| [book](https://github.com/vivoblueos/book) | Kernel technical documentation and tutorials, including detailed kernel development guides |
| [build](https://github.com/vivoblueos/build) | Project compilation build templates and scripts |
| [kernel](https://github.com/vivoblueos/kernel) | Core kernel repository, including CPU architecture support, system scheduler, sync primitives, async executor, memory management subsystem, file system, network subsystem, device subsystem, etc. |
| [libc](https://github.com/vivoblueos/libc) | BlueOS kernel libc header files, forked from [rust-lang/libc](https://github.com/rust-lang/libc) |
| [librs](https://github.com/vivoblueos/librs) | BlueOS kernel libc implementation based on Rust programming language |
# Getting started with the kernel development
To build and work with the BlueOS kernel, please check the following documentation.
- [Prepare basic build environment](https://github.com/vivoblueos/book/blob/main/src/getting-started.md)
- [Build customized Rust toolchain](https://github.com/vivoblueos/book/blob/main/src/build-rust-toolchain.md)
- [Work with the kernel](https://github.com/vivoblueos/book/blob/main/src/build-kernel.md)
# Technical Documentation
For more information about the BlueOS kernel, please refer to [the kernel book](https://github.com/vivoblueos/book).
|
https://github.com/iyush/COS
|
COS
Tiny x86_64 OS in C
Languages: C (92.7%), Assembly (2.3%), Linker Script (2.3%), Shell (2.2%)
kernel
kernel
userland
userland
...
.bochsrc
.bochsrc
.gitignore
.gitignore
README.md
README.md
build.sh
build.sh
debug.sh
debug.sh
> README.md
# COS
Tiny x86_64 Operating System written in C. The OS can:
1. Handle Interrupts.
2. Allocate Physical Memory.
3. Load Executables (ELF).
4. Preemptively schedule tasks.
5. Do syscalls.
The OS does not currently have (but will at some point in the future):
1. Virtual Memory Manager (It is very simple atm).
2. Graphics Stack
3. Networking Stack
## Building
Make sure you have [nix](https://nixos.org/) installed and that you have cloned this repo recursively so that limine is pulled in. The currently supported limine version is:
```
HEAD detached at origin/v7.x-binary
```
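If the repo was not cloned recursively, the submodule can usually be pulled in afterwards; a minimal sketch, assuming limine is tracked as a git submodule as the note above implies:
```
# Clone with submodules so limine is pulled in:
git clone --recursive https://github.com/iyush/COS.git
# Or, inside an existing checkout:
git submodule update --init --recursive
```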
1. Pop into nix-shell
```
nix-shell
```
2. Build limine
```
cd limine
make
```
3. Build the OS and Userland
```
./build.sh
```
## Running
Run:
```
./run.sh
```
## Debugging
Run:
```
./debug.sh
```
|
https://github.com/google-ai-edge/LiteRT-LM
|
LiteRT-LM
Languages: C++ (88.2%), Starlark (7.8%), Python (4.0%)
.github/workflows
.github/workflows
prebuilt/android_arm64
prebuilt/android_arm64
python
python
runtime
runtime
schema
schema
...
.bazelrc
.bazelrc
.bazelversion
.bazelversion
.gitignore
.gitignore
BUILD
BUILD
BUILD.darts_clone
BUILD.darts_clone
> README.md
# LiteRT-LM
A C++ library to efficiently run language models across edge platforms.
## Description
Language models are no longer a single model but really a pipeline of models and
components working together. LiteRT-LM builds on top of
[LiteRT](https://github.com/google-ai-edge/LiteRT) to enable these pipelines
including:
*   **C++ API** to efficiently run language models
* **Cross-Platform** support via portable C++ for broad deployment scenarios
* **Flexible** so you can customize for your specific feature
* **Hardware Acceleration** to unlock the full potential of your device's
hardware
### Status: Early Preview
Expect our first full release of LiteRT-LM late summer / early fall. We heard
the community feedback regarding Google AI Edge's Gemma 3n LiteRT preview. You
want access on more platforms, more visibility into the underlying stack, and
more flexibility. LiteRT-LM can help with all three.
### 🚀 What's New
* ***June 24, 2025*** **: Run Gemma models with NPU Support (`v0.7.0`)**
Unlock significant performance gains! Our latest release leverages the power
of Neural Processing Units (NPUs) on devices with Qualcomm and MediaTek
chipsets to run the Gemma3 1B model with incredible efficiency.
**Note:** LiteRT-LM NPU acceleration is only available through an Early
Access Program. Please check out
[this page](https://ai.google.dev/edge/litert/next/npu) for more information
about how to sign up.
* ***June 10, 2025*** **: The Debut of LiteRT-LM: A New Framework for
On-Device LLMs** We're proud to release an early preview (`v0.6.1`) of the
LiteRT-LM codebase! This foundational release enables you to run the latest
Gemma series models across a wide range of devices with initial support for
CPU execution and powerful GPU acceleration on Android.
### Supported Backends & Platforms
Platform | CPU Support | GPU Support | NPU Support |
:----------- | :---------: | :-----------: | :-----------:
**Android** | ✅ | ✅ | ✅ |
**macOS** | ✅ | *Coming Soon* | - |
**Windows** | ✅ | *Coming Soon* | - |
**Linux** | ✅ | *Coming Soon* | - |
**Embedded** | ✅ | *Coming Soon* | - |
### Supported Models and Performance
Currently supported models during our Preview (as `.litertlm` format).
Model | Quantization | Context size | Model Size (MB) | Download link
:---------- | :---------------: | :----------: | :-------------: | :-----------:
Gemma3-1B | 4-bit per-channel | 4096 | 557 | [download](https://huggingface.co/litert-community/Gemma3-1B-IT/blob/main/Gemma3-1B-IT_multi-prefill-seq_q4_ekv4096.litertlm)
Gemma3n-E2B | 4-bit per-channel | 4096 | 2965 | [download](https://huggingface.co/google/gemma-3n-E2B-it-litert-lm-preview)
Gemma3n-E4B | 4-bit per-channel | 4096 | 4235 | [download](https://huggingface.co/google/gemma-3n-E4B-it-litert-lm-preview)
Below are the performance numbers for running each model on various devices. Note that the benchmark is measured with 1024 prefill tokens and 256 decode tokens (with a performance lock on Android devices).
| Model | Device | Backend | Prefill (tokens/sec) | Decode (tokens/sec) | Context size |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Gemma3-1B | MacBook Pro<br>(2023 M3) | CPU | 422.98 | 66.89 | 4096 |
| Gemma3-1B | Samsung S24<br>(Ultra) | CPU | 243.24 | 43.56 | 4096 |
| Gemma3-1B | Samsung S24<br>(Ultra) | GPU | 1876.5 | 44.57 | 4096 |
| Gemma3-1B | Samsung S25<br>(Ultra) | NPU | 5836.6 | 84.8 | 1280 |
| Gemma3n-E2B | MacBook Pro<br>(2023 M3) | CPU | 232.5 | 27.6 | 4096 |
| Gemma3n-E2B | Samsung S24<br>(Ultra) | CPU | 110.5 | 16.1 | 4096 |
| Gemma3n-E2B | Samsung S24<br>(Ultra) | GPU | 816.4 | 15.6 | 4096 |
| Gemma3n-E4B | MacBook Pro<br>(2023 M3) | CPU | 170.1 | 20.1 | 4096 |
| Gemma3n-E4B | Samsung S24<br>(Ultra) | CPU | 73.5 | 9.2 | 4096 |
| Gemma3n-E4B | Samsung S24<br>(Ultra) | GPU | 548.0 | 9.4 | 4096 |
## Quick Start
This guide provides the necessary steps to build and execute a Large Language
Model (LLM) on your device. Note that the LiteRT-LM runtime is designed to work
with models in the `.litertlm` format. You can find and download compatible
models in the
[Supported Models and Performance](#supported-models-and-performance) section.
**Want to try it out first?** Before proceeding with the full setup, you can use
the pre-built binary below to run the LiteRT-LM immediately:
- [Android Arm64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.android_arm64)
- [MacOS](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.macos_arm64)
- [Linux x86_64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.linux_x86_64)
- [Windows x86_64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.windows_x86_64.exe)
- [iOS Arm64](https://github.com/google-ai-edge/LiteRT-LM/releases/latest/download/litert_lm_main.ios_sim_arm64)
*Tip: you may have to explicitly approve the usage of pre-built binaries. For example, on macOS, you should go to **System Settings > Privacy & Security > Security** to approve the binary.*
### Prerequisites
Before you begin, please ensure you have the following installed:
- **Git**: To clone the repository and manage versions.
- **Bazel (version 7.6.1)**: This project uses `bazel` as its build system.
#### Get the Source Code
Current stable branch tag: [](https://github.com/google-ai-edge/LiteRT-LM/releases/latest)
First, clone the repository to your local machine. We strongly recommend
checking out the latest stable release tag to ensure you are working with a
stable version of the code.
**Clone the repository:**
```
git clone git@github.com:google-ai-edge/LiteRT-LM.git
cd LiteRT-LM
```
**Fetch the latest tags from the remote repository:**
```
git fetch --tags
```
**Checkout the latest stable release ([](https://github.com/google-ai-edge/LiteRT-LM/releases/latest)):**
To start working, create a new branch from the stable tag. This is the
recommended approach for development.
```
git checkout -b <my-feature-branch> <release-tag, e.g. "v0.6.1">
```
You are now on a local branch created from the tag and ready to work.
#### Install Bazel
This project requires Bazel version **7.6.1**. You can skip this if you already
have it set up.
The easiest way to manage Bazel versions is to install it via
[Bazelisk](https://github.com/bazelbuild/bazelisk). Bazelisk will automatically
download and use the correct Bazel version specified in the project's
.bazelversion file.
Alternatively, you can install Bazel manually by following the official
installation [instructions](https://bazel.build/install) for your platform.
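For example, on macOS or Linux you could install Bazelisk through a package manager and let it resolve the pinned version (these package-manager routes are common options, not requirements of this repo):
```
# Install Bazelisk (pick one):
brew install bazelisk            # macOS, via Homebrew
npm install -g @bazel/bazelisk   # any platform with Node.js
# Bazelisk reads .bazelversion and fetches the matching Bazel release:
bazelisk version
```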
### Build and Run the Command Line Demo
**LiteRT-LM** allows you to deploy and run LLMs on various platforms, including
Android, Linux, MacOS, and Windows. `runtime/engine/litert_lm_main.cc` is a
[command line demo](#litert_lm_main) that shows how to initialize and interact
with the model.
Please check the corresponding section below depending on your target deployment
device and your development platform.
<details>
<summary><strong>Deploy to Windows</strong></summary>
Building on Windows requires several prerequisites to be installed first.
#### Prerequisites
1. **Visual Studio 2022** - Install from Microsoft Store to get the MSVC
toolchain.
2. **Git for Windows** - Install from https://git-scm.com/download/win
(includes Git Bash needed for flatbuffer generation scripts).
3. **Python 3.11** - Install from Microsoft Store for Python dependencies.
4.  **Bazel** - Install using the Windows Package Manager (winget): run `winget install --id=Bazel.Bazelisk -e` in PowerShell.
5. Download the `.litertlm` model from the
[Supported Models and Performance](#supported-models-and-performance)
section.
#### Building and Running
Once you've downloaded the `.litertlm` file, set the path for convenience:
```powershell
$Env:MODEL_PATH = "C:\path\to\your_model.litertlm"
```
Build the binary:
```powershell
# Build litert_lm_main for Windows.
bazelisk build //runtime/engine:litert_lm_main --config=windows
```
Run the binary (make sure you run the following command in **powershell**):
```powershell
# Run litert_lm_main.exe with a model .litertlm file.
bazel-bin\runtime\engine\litert_lm_main.exe `
--backend=cpu `
--model_path=$Env:MODEL_PATH
```
</details>
<details>
<summary><strong>Deploy to Linux / Embedded</strong></summary>
`clang` is used to build LiteRT-LM on Linux. Build `litert_lm_main`, a CLI executable, to run models on CPU. Note that you should download the `.litertlm`
model from the
[Supported Models and Performance](#supported-models-and-performance) section.
Note that one can also deploy the model to a Raspberry Pi using the same setup and
command in this section.
Once you've downloaded the `.litertlm` file, set the path for convenience:
```
export MODEL_PATH=<path to your .litertlm file>
```
Build the binary:
```
bazel build //runtime/engine:litert_lm_main
```
Run the binary:
```
bazel-bin/runtime/engine/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH
```
</details>
<details>
<summary><strong>Deploy to MacOS</strong></summary>
Xcode command line tools include clang. Run `xcode-select --install` if not
installed before. Note that you should download the `.litertlm` model from the
[Supported Models and Performance](#supported-models-and-performance) section.
Once you've downloaded the `.litertlm` file, set the path for convenience:
```
export MODEL_PATH=<path to your .litertlm file>
```
Build the binary:
```
bazel build //runtime/engine:litert_lm_main
```
Run the binary:
```
bazel-bin/runtime/engine/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH
```
</details>
<details>
<summary><strong>Deploy to Android</strong></summary>
To be able to interact with your Android device, please make sure you've
properly installed
[Android Debug Bridge](https://developer.android.com/tools/adb) and have a
connected device that can be accessed via `adb`.
**Note:** If you are interested in trying out LiteRT-LM with NPU acceleration,
please check out [this page](https://ai.google.dev/edge/litert/next/npu) for
more information about how to sign up for the Early Access Program.
<details>
<summary><strong>Develop in Linux</strong></summary>
To be able to build the binary for Android, one needs to install NDK r28b or
newer from https://developer.android.com/ndk/downloads#stable-downloads.
Specific steps are:
- Download the `.zip` file from
https://developer.android.com/ndk/downloads#stable-downloads.
- Unzip the `.zip` file to your preferred location (say
`/path/to/AndroidNDK/`)
- Point `ANDROID_NDK_HOME` to the NDK directory. It should be something like:
```
export ANDROID_NDK_HOME=/path/to/AndroidNDK/
```
*Tips: make sure your `ANDROID_NDK_HOME` points to the directory that has
`README.md` in it.*
With the above set up, let's try to build the `litert_lm_main` binary:
```
bazel build --config=android_arm64 //runtime/engine:litert_lm_main
```
</details>
<details>
<summary><strong>Develop in MacOS</strong></summary>
Xcode command line tools include clang. Run `xcode-select --install` if not
installed before.
To be able to build the binary for Android, one needs to install NDK r28b or
newer from https://developer.android.com/ndk/downloads#stable-downloads.
Specific steps are:
- Download the `.dmg` file from
https://developer.android.com/ndk/downloads#stable-downloads.
- Open the `.dmg` file and move the `AndroidNDK*` file to your preferred
location (say `/path/to/AndroidNDK/`)
- Point `ANDROID_NDK_HOME` to the NDK directory. It should be something like:
```
export ANDROID_NDK_HOME=/path/to/AndroidNDK/AndroidNDK*.app/Contents/NDK/
```
*Tips: make sure your `ANDROID_NDK_HOME` points to the directory that has
`README.md` in it.*
With the above set up, let's try to build the `litert_lm_main` binary:
```
bazel build --config=android_arm64 //runtime/engine:litert_lm_main
```
</details>
After the binary is successfully built, we can try to run the model on the device. In order to do so, we have to push a few assets / binaries to it. First, set your `DEVICE_FOLDER` and make sure you have write access to it (typically you can put things under `/data/local/tmp/`):
```
export DEVICE_FOLDER=/data/local/tmp/
adb shell mkdir -p $DEVICE_FOLDER
```
To run with **CPU** backend, simply push the main binary and the `.litertlm`
model to device and run.
```
# Skip model push if it is already there
adb push $MODEL_PATH $DEVICE_FOLDER/model.litertlm
adb push bazel-bin/runtime/engine/litert_lm_main $DEVICE_FOLDER
adb shell $DEVICE_FOLDER/litert_lm_main \
--backend=cpu \
--model_path=$DEVICE_FOLDER/model.litertlm
```
To run with **GPU** backend, we need additional `.so` files. They are located in
the `prebuilt/` subfolder in the repo (we currently only support `arm64`).
```
# Skip model push if it is already there
adb push $MODEL_PATH $DEVICE_FOLDER/model.litertlm
adb push prebuilt/android_arm64/*.so $DEVICE_FOLDER
adb push bazel-bin/runtime/engine/litert_lm_main $DEVICE_FOLDER
adb shell LD_LIBRARY_PATH=$DEVICE_FOLDER \
$DEVICE_FOLDER/litert_lm_main \
--backend=gpu \
--model_path=$DEVICE_FOLDER/model.litertlm
```
Note that the first time a given model is loaded on a given device, it will take
longer to load. This is because the model weights are being arranged to run
optimally on your particular device's GPU. Subsequent loads will be faster
because the optimized weights are cached on your device.
</details>
### Command Line Demo Usage <span id="litert_lm_main"></span>
`litert_lm_main` is a command line demo for running and evaluating large
language models (LLMs) using our LiteRT [Engine/Session interface](#engine). It
provides basic functionality such as:
- generating text based on a user-provided prompt.
- executing the inference on various hardware backends, e.g. CPU / GPU.
- benchmarking prefill and decoding speeds, as well as monitoring peak memory consumption during the run.
- running in both synchronous and asynchronous execution modes.
Below are a few example commands (please update accordingly when using `adb`):
**Run the model with default prompt**
```
<path to binary directory>/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH
```
**Benchmark the model performance**
```
<path to binary directory>/litert_lm_main \
--backend=cpu \
--model_path=$MODEL_PATH \
--benchmark \
--benchmark_prefill_tokens=1024 \
--benchmark_decode_tokens=256 \
--async=false
```
*Tip: when benchmarking on Android devices, remember to use `taskset` to pin the
executable to the main core for getting the consistent numbers, e.g. `taskset
f0`.*
**Run the model with your prompt**
```
<path to binary directory>/litert_lm_main \
--backend=cpu \
--input_prompt=\"Write me a song\"
--model_path=$MODEL_PATH
```
A more detailed description of each flag is given in the following table:
| Flag Name | Description | Default Value |
| :--- | :--- | :--- |
| `backend` | Executor backend to use for LLM execution (e.g., cpu, gpu). | `"gpu"` |
| `model_path` | Path to the `.litertlm` file for LLM execution. | `""` |
| `input_prompt` | Input prompt to use for testing LLM execution. | `"What is the tallest building in the world?"` |
| `benchmark` | Benchmark the LLM execution. | `false` |
| `benchmark_prefill_tokens` | If benchmark is true and this value is > 0, the benchmark will use this number to set the prefill tokens, regardless of the input prompt. If this is non-zero, `async` must be `false`. | `0` |
| `benchmark_decode_tokens` | If benchmark is true and this value is > 0, the benchmark will use this number to set the number of decode steps, regardless of the input prompt. | `0` |
| `async` | Run the LLM execution asynchronously. | `true` |
| `report_peak_memory_footprint` | Report peak memory footprint. | `false` |
## LiteRT-LM API <span id="engine"></span>
The LiteRT-LM provides a C++ API for executing Language Models. It is designed
around two primary classes: `Engine` and `Session`.
- The **`Engine`** is the main entry point. It's responsible for loading the
model and its associated resources (like the tokenizer) from storage and
preparing them for execution. It acts as a factory for creating `Session`
objects.
- The **`Session`** represents a single, stateful conversation or interaction
with the LLM. It holds the context (like conversation history) and provides
the methods to actually generate text. Each `Session` is an independent
instance, allowing for multiple interactions.
### Basic Workflow for Text-in-Text-out Inference
The typical lifecycle for using the runtime is:
1. **Create an `Engine`**: Initialize a single `Engine` with the model path and
configuration. This is a heavyweight object that holds the model weights.
2. **Create a `Session`**: Use the `Engine` to create one or more lightweight
`Session` objects.
3. **Generate Content**: Use a `Session` object to run inference, either
through a simple one-shot API or through more granular prefill/decode steps.
Below is the simplest way to generate text and is recommended for most use
cases. It mirrors
[Gemini text generation APIs](https://ai.google.dev/gemini-api/docs).
- `GenerateContent`: A blocking call that takes user input and returns the
complete model response.
- `GenerateContentStream`: A non-blocking call that streams the model's
response back token-by-token through an observer.
Example code snippet:
```cpp
#include "third_party/odml/litert_lm/runtime/engine/engine.h"
// ...
// 1. Define model assets and engine settings.
auto model_assets = ModelAssets::Create(model_path);
CHECK_OK(model_assets);
auto engine_settings = EngineSettings::CreateDefault(
model_assets, litert::lm::Backend::CPU);
// 2. Create the main Engine object.
absl::StatusOr<std::unique_ptr<Engine>> engine = Engine::CreateEngine(engine_settings);
CHECK_OK(engine);
// 3. Create a Session for a new conversation.
auto session_config = SessionConfig::CreateDefault();
absl::StatusOr<std::unique_ptr<Engine::Session>> session = (*engine)->CreateSession(session_config);
CHECK_OK(session);
// 4. Generate content using the high-level API.
absl::StatusOr<Responses> responses = (*session)->GenerateContent(
{InputText("What is the tallest building in the world?")});
CHECK_OK(responses);
// 5. Print the response.
std::cout << *responses << std::endl;
```
### Inference with GPU Backend
On Android, the runtime can pick GPU as the backend for inference instead of
CPU, by passing `litert::lm::Backend::GPU` in `EngineSettings::CreateDefault()`.
```cpp
// ...
// Set GPU as backend instead of CPU.
auto engine_settings = EngineSettings::CreateDefault(
model_assets, litert::lm::Backend::GPU);
// ...
```
When the engine is created, it looks for `libLiteRtGpuAccelerator.so` and
`libLiteRtTopKSampler.so` from the directories specified in `LD_LIBRARY_PATH`,
the rpath in the app binary, or the default locations searched by the system dynamic linker. For example, if an app binary and `.so` files are packaged in an APK by the Android SDK, the `.so` files are unpacked by the Android Package Manager to a location where the app binary can find them, i.e. under the app's `/lib` directory.
### Advanced Control Over Prefill/Decode
This API provides fine-grained control over the two phases of transformer
inference: prefill and decode. This can be useful for advanced scenarios or
performance optimization.
- **Prefill**: The `RunPrefill` or `RunPrefillAsync` methods process the input
prompt and populate the model's internal state (KV cache).
- **Decode**: The `RunDecode` or `RunDecodeAsync` methods generate new tokens
one at a time based on the prefilled state.
Example code snippet:
```cpp
#include "third_party/odml/litert_lm/runtime/engine/engine.h"
// ...
// 1. Define model assets and engine settings.
auto model_assets = ModelAssets::Create(model_path);
CHECK_OK(model_assets);
auto engine_settings = EngineSettings::CreateDefault(
model_assets, litert::lm::Backend::CPU);
// 2. Create the main Engine object.
absl::StatusOr<std::unique_ptr<Engine>> engine = Engine::CreateEngine(engine_settings);
CHECK_OK(engine);
// 3. Create a Session for a new conversation.
auto session_config = SessionConfig::CreateDefault();
absl::StatusOr<std::unique_ptr<Engine::Session>> session = (*engine)->CreateSession(session_config);
CHECK_OK(session);
// 4. Prefill some prompts.
CHECK_OK((*session)->RunPrefill({InputText("What's the tallest building in the world?")}));
CHECK_OK((*session)->RunPrefill({InputText(" and what's the tallest building in the United States?")}));
// 5. Start decoding.
auto responses = (*session)->RunDecode();
// 6. Print the response.
std::cout << *responses << std::endl;
```
## FAQ
### LiteRT vs LiteRT-LM vs MediaPipe GenAI Tasks
LiteRT, LiteRT-LM, and MediaPipe GenAI Tasks are three libraries within the
Google AI Edge stack that build on each other. By exposing functionality at
different abstraction layers, we hope to enable developers to balance their
respective needs between flexibility and complexity.
[LiteRT](https://ai.google.dev/edge/litert) is Google AI Edge's underlying
on-device runtime. Developer can convert individual PyTorch, TensorFlow, and JAX
models to LiteRT and run them on-device.
**LiteRT-LM** gives developers the pipeline framework to stitch together
multiple LiteRT models with pre and post processing components (e.g. tokenizer,
vision encoder, text decoder).
[MediaPipe GenAI Tasks](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference)
are out-of-the-box native APIs (Kotlin, Swift, JS) to run language models by
just setting a few parameters such as temperature and topK.
### .litertlm vs .task
MediaPipe GenAI Tasks currently use `.task` files to represent language models.
Task files are a zip of multiple LiteRT files, components, and metadata.
`.litertlm` is an evolution of the `.task` file format to include additional
metadata and enable better compression.
During our LiteRT-LM preview, we will release a small number of `.litertlm`
files. MediaPipe APIs will continue to use `.task` files. Once we have the
first full release of LiteRT-LM, we will migrate MediaPipe APIs to use the new
`.litertlm` files and release a wider collection of `.litertlm` files on the
[LiteRT Hugging Face Community](https://huggingface.co/litert-community).
## Reporting Issues
If you encounter a bug or have a feature request, we encourage you to use the
[GitHub Issues](https://github.com/google-ai-edge/LiteRT-LM/issues/new) page to
report it.
Before creating a new issue, please search the existing issues to avoid
duplicates. When filing a new issue, please provide a clear title and a detailed
description of the problem, including steps to reproduce it. The more
information you provide, the easier it will be for us to help you.
|
https://github.com/Foreseerr/TScale
|
TScale
Languages: C++ (69.1%), Cuda (28.2%), C (2.1%)
cfg
cfg
code
code
doc
doc
fo
fo
img
img
...
.gitignore
.gitignore
Dockerfile
Dockerfile
LICENSE
LICENSE
README.md
README.md
test.cfg
test.cfg
> README.md
# TScale
This repo contains transformer train and inference code written in C++ and CUDA.
TScale is designed to run on consumer hardware. To achieve the best results it features
- Optimized transformer architecture with faster convergence and ~2x reduced attention costs
- Support for fp8 and int8 model weights and activations precision
- Optimized for consumer nVidia GPUs including fast reduced precision training without sacrificing model quality
- CPU offload reduces GPU memory requirements for training
- Sync distributed training on several same config hosts
- 1-bit gradient compression allowing the use of regular Ethernet links for interconnect
- Async distributed training on arbitrary hosts with negligible network traffic. In this mode training can be run on geographically separated hosts
# Distributed training of 1.5B model on consumer GPU
By using inexpensive GPUs and the async distributed mode, TScale trains LLMs fast and affordably. Log loss for the 1.5B model trained on fineweb-edu for 2 days and $500 on several spot instances with 4090 GPUs:

# Training your own 1T model at home
A 1T model size sounds beyond reach for most people and even organisations. However, if we consider creative ways to count model size, it becomes possible. In this case we build a model with a 1T index which we look up for every token, so predictions are made with a much smaller model. In terms of log loss / perplexity this construction easily achieves stellar results. The index for fineweb-edu occupies about 1T of disk space. A training run of a 125M model with this ~1T index achieves an **x8** perplexity reduction:
|Model|Perplexity|
|-----|-|
|125M |19.02|
|125M + 1T index|2.28|
# Read more
[Training 125M model](doc/125M_model.md)
[Training 1.5B model](doc/1.5B_model.md)
[Training 1T (!) model in your kitchen](doc/1T_model.md)
[Async distributed train](doc/fed.md)
[Notes on model and compute precision](doc/precision.md)
[TScale transformer model](doc/model.md)
[Data indexing](doc/lm_search.md)
[Tokenizer](doc/tokenizer.md)
# Build
To build the code, CUDA v12.3 and a C++ compiler are required: MSVC on Windows, CMake + Clang on Linux. To support cross-platform build file generation this repo uses [fo](doc/fo.md), a lightweight solution/build file generator. To generate build files, compile fo/fo.cpp and run it with two arguments: the first is the root of the source tree, the second is the directory to store the build files in.
## Windows
```bash
D:\TScale>fo.exe code sln
```
Then open the generated solution at D:\TScale\sln\code.sln.
## Linux
To compile TScale for Linux, compile fo.cpp, generate the CMakeLists.txt file, run cmake, and then run make.
```bash
~/TScale/fo$ clang++-18 fo.cpp -o fo
~/TScale/fo$ cd ..
~/TScale$ ./fo/fo code make.dir
~/TScale$ cd make.dir
~/TScale/make.dir$ cmake -D CMAKE_BUILD_TYPE=RelWithDebInfo .
~/TScale/make.dir$ make
```
# Get train data
Examples in the code use the [enwik9](https://mattmahoney.net/dc/textdata.html) dataset and its truncated version enwik8. The Hugging Face hosted datasets openwebtext, ontocord/CulturaY, and danasone/librusec are also used in examples. To import them use [hf_import](/pysrc/hf_import/import.py).
# Train model
[gpt_train](/code/gpt/train) is used to train a model. It is controlled by the [train script](/doc/train_script.md) and [data script](/doc/data_script.md). Default scripts are stored in [main_gpt.cpp](/code/gpt/train/main_gpt.cpp). To load train script from file run gpt_train with '-d data_script.txt -s train_script.txt' arguments.
## quick run
Compile gpt-train. Run it in the root directory:
```bash
~/TScale$ ./make.dir/gpt-train
```
## sync distributed run
Currently training can be distributed only among a power-of-2 number of worker hosts.
To start a worker process, run gpt_train with the '-w 10000' argument; 10000 specifies the port number to use.
To run the master process, call the net_train('worker.txt') function in the train script and list the worker IP addresses in the file provided to net_train().
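A minimal sketch of a 2-worker setup (the IP addresses are placeholders):
```bash
# On each worker host, start a worker listening on port 10000:
~/TScale$ ./make.dir/gpt-train -w 10000

# On the master host, create worker.txt with one worker IP per line, e.g.:
#   192.168.1.10
#   192.168.1.11
# then call net_train('worker.txt') from the train script.
```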
## multiple GPU
To use multiple GPU devices, set the DEVICE_COUNT variable in the train script to the number of GPUs to use. For distributed runs DEVICE_COUNT is applied on each worker; heterogeneous configurations are not supported.
## scripts
Description of scripts used in training: [data script](doc/data_script.md), [train script](doc/train_script.md)
# Inference test
To try inference with the trained model you can use [gpt_infer](/code/gpt/infer). It runs a basic HTTP server on port 11311 and allows sampling continuations from the model. The current implementation is slow and designed for demonstration purposes only.
# License
MIT
|
https://github.com/ashvardanian/fork_union
|
fork_union
Low(est?)-latency OpenMP-style minimalistic scoped thread-pool designed for 'Fork-Join' parallelism in Rust and C++, avoiding memory allocations, mutexes, CAS-primitives, and false-sharing on the hot path 🍴
Languages: C++ (57.2%), Rust (32.2%), C (8.2%), CMake (2.3%), Python (0.1%)
.github/workflows
.github/workflows
.vscode
.vscode
c
c
cmake
cmake
include
include
...
.clang-format
.clang-format
.clang-tidy
.clang-tidy
.cmake-format.py
.cmake-format.py
.gitignore
.gitignore
CMakeLists.txt
CMakeLists.txt
> README.md
# Fork Union 🍴
"Fork Union" is the low(est?)-latency [OpenMP](https://en.wikipedia.org/wiki/OpenMP)-style [NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access)-aware minimalistic scoped thread-pool designed for 'Fork-Join' parallelism in C++, C, and Rust, avoiding × [mutexes & system calls](#locks-and-mutexes), × [dynamic memory allocations](#memory-allocations), × [CAS-primitives](#atomics-and-cas), and × [false-sharing](#) of CPU cache-lines on the hot path 🍴

Most "thread-pools" are not, in fact, thread-pools, but rather "task-queues" that are designed to synchronize a concurrent dynamically growing list of heap-allocated globally accessible shared objects.
In C++ terms, think of it as a `std::queue<std::function<void()>>` protected by a `std::mutex`, where each thread waits for the next task to be available and then executes it on some random core chosen by the OS scheduler.
All of that is slow... and true across C++, C, and Rust projects.
Short of OpenMP, practically every other solution has high dispatch latency and noticeable memory overhead.
OpenMP, however, is not ideal for fine-grained parallelism and is less portable than the C++ and Rust standard libraries.
This is where __`fork_union`__ comes in.
It's a C++ 17 library with C 99 and Rust bindings ([previously Rust implementation was standalone](#reimplementing-in-rust)).
It supports pinning threads to specific NUMA nodes or individual CPU cores, making it much easier to ensure data locality and halving the latency of individual loads in Big Data applications.
## Basic Usage
__`Fork Union`__ is dead-simple to use!
There is no nested parallelism, exception handling, or "future promises"; they are banned.
The thread pool itself has a few core operations:
- `try_spawn` to initialize worker threads, and
- `for_threads` to launch a blocking callback on all threads.
Higher-level APIs for index-addressable tasks are also available:
- `for_n` - for individual evenly-sized tasks,
- `for_n_dynamic` - for individual unevenly-sized tasks,
- `for_slices` - for slices of evenly-sized tasks.
For additional flow control and tuning, the following helpers are available:
- `sleep(microseconds)` - for longer naps,
- `terminate` - to kill the threads before the destructor is called,
- `unsafe_for_threads` - to broadcast a callback without blocking,
- `unsafe_join` - to block until the completion of the current broadcast.
On Linux, in C++, given the maturity and flexibility of the HPC ecosystem, it provides [NUMA extensions](#non-uniform-memory-access-numa).
That includes the `linux_colocated_pool` analog of the `basic_pool` and the `linux_numa_allocator` for allocating memory on a specific NUMA node.
Those are out-of-the-box compatible with the higher-level APIs.
Most interestingly, for Big Data applications, a higher-level `distributed_pool` class will address and balance the work across all NUMA nodes.
### Intro in Rust
A minimal example may look like this:
```rs
use fork_union as fu;
let mut pool = fu::spawn(2);
pool.for_threads(|thread_index, colocation_index| {
println!("Hello from thread # {} on colocation # {}", thread_index + 1, colocation_index + 1);
});
```
Higher-level APIs distribute index-addressable tasks across the threads in the pool:
```rs
pool.for_n(100, |prong| {
println!("Running task {} on thread # {}",
prong.task_index + 1, prong.thread_index + 1);
});
pool.for_slices(100, |prong, count| {
println!("Running slice [{}, {}) on thread # {}",
prong.task_index, prong.task_index + count, prong.thread_index + 1);
});
pool.for_n_dynamic(100, |prong| {
println!("Running task {} on thread # {}",
prong.task_index + 1, prong.thread_index + 1);
});
```
A safer `try_spawn_in` interface is recommended using the Allocator API.
A more realistic example may look like this:
```rs
use std::error::Error;
use fork_union as fu;
fn heavy_math(_: usize) {}
fn main() -> Result<(), Box<dyn Error>> {
let mut pool = fu::ThreadPool::try_spawn(4)?;
let mut pool = fu::ThreadPool::try_named_spawn("heavy-math", 4)?;
pool.for_n_dynamic(400, |prong| {
heavy_math(prong.task_index);
});
Ok(())
}
```
### Intro in C++
To integrate into your C++ project, either copy the `include/fork_union.hpp` file into your project, add a Git submodule, or use CMake.
For a Git submodule, run:
```bash
git submodule add https://github.com/ashvardanian/fork_union.git extern/fork_union
```
Alternatively, using CMake:
```cmake
FetchContent_Declare(
fork_union
GIT_REPOSITORY
https://github.com/ashvardanian/fork_union
)
FetchContent_MakeAvailable(fork_union)
target_link_libraries(your_target PRIVATE fork_union::fork_union)
```
Then, include the header in your C++ code:
```cpp
#include <fork_union.hpp> // `basic_pool_t`
#include <cstdio> // `stderr`
#include <cstdlib> // `EXIT_SUCCESS`
namespace fu = ashvardanian::fork_union;
int main() {
fu::basic_pool_t pool;
if (!pool.try_spawn(std::thread::hardware_concurrency())) {
std::fprintf(stderr, "Failed to fork the threads\n");
return EXIT_FAILURE;
}
// Dispatch a callback to each thread in the pool
pool.for_threads([&](std::size_t thread_index) noexcept {
std::printf("Hello from thread # %zu (of %zu)\n", thread_index + 1, pool.count_threads());
});
// Execute 1000 tasks in parallel, expecting them to have comparable runtimes
// and mostly co-locating subsequent tasks on the same thread. Analogous to:
//
// #pragma omp parallel for schedule(static)
// for (int i = 0; i < 1000; ++i) { ... }
//
// You can also think about it as a shortcut for the `for_slices` + `for`.
pool.for_n(1000, [](std::size_t task_index) noexcept {
std::printf("Running task %zu of 3\n", task_index + 1);
});
pool.for_slices(1000, [](std::size_t first_index, std::size_t count) noexcept {
std::printf("Running slice [%zu, %zu)\n", first_index, first_index + count);
});
// Like `for_n`, but each thread greedily steals tasks, without waiting for
// the others or expecting individual tasks to have same runtimes. Analogous to:
//
// #pragma omp parallel for schedule(dynamic, 1)
// for (int i = 0; i < 3; ++i) { ... }
pool.for_n_dynamic(3, [](std::size_t task_index) noexcept {
std::printf("Running dynamic task %zu of 1000\n", task_index + 1);
});
return EXIT_SUCCESS;
}
```
That's it.
For advanced usage, refer to the [NUMA section below](#non-uniform-memory-access-numa).
## Alternatives & Differences
Many other thread-pool implementations are more feature-rich but have different limitations and design goals.
- Modern C++: [`taskflow/taskflow`](https://github.com/taskflow/taskflow), [`progschj/ThreadPool`](https://github.com/progschj/ThreadPool), [`bshoshany/thread-pool`](https://github.com/bshoshany/thread-pool)
- Traditional C++: [`vit-vit/CTPL`](https://github.com/vit-vit/CTPL), [`mtrebi/thread-pool`](https://github.com/mtrebi/thread-pool)
- Rust: [`tokio-rs/tokio`](https://github.com/tokio-rs/tokio), [`rayon-rs/rayon`](https://github.com/rayon-rs/rayon), [`smol-rs/smol`](https://github.com/smol-rs/smol)
Those are not designed for the same OpenMP-like use cases as __`fork_union`__.
Instead, they primarily focus on task queuing, which requires significantly more work.
### Locks and Mutexes
Unlike the `std::atomic`, the `std::mutex` is a system call, and it can be expensive to acquire and release.
Its implementations generally have 2 executable paths:
- the fast path, where the mutex is not contended, where it first tries to grab the mutex via a compare-and-swap operation, and if it succeeds, it returns immediately.
- the slow path, where the mutex is contended, and it has to go through the kernel to block the thread until the mutex is available.
On Linux, the latter translates to ["futex"](https://en.wikipedia.org/wiki/Futex) ["system calls"](https://en.wikipedia.org/wiki/System_call), which is expensive.
### Memory Allocations
C++ has rich functionality for concurrent applications, like `std::future`, `std::packaged_task`, `std::function`, `std::queue`, `std::conditional_variable`, and so on.
Most of those, I believe, aren't usable in Big-Data applications, where you always operate in memory-constrained environments:
- The idea of raising a `std::bad_alloc` exception when there is no memory left and just hoping that someone up the call stack will catch it is not a great design idea for any Systems Engineering.
- The threat of having to synchronize ~200 physical CPU cores across 2-8 sockets and potentially dozens of [NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access) nodes around a shared global memory allocator practically means you can't have predictable performance.
As we focus on a simpler ~~concurrency~~ parallelism model, we can avoid the complexity of allocating shared states, wrapping callbacks into some heap-allocated "tasks", and other boilerplate code.
Less work - more performance.
### Atomics and [CAS](https://en.wikipedia.org/wiki/Compare-and-swap)
Once you get to the lowest-level primitives on concurrency, you end up with the `std::atomic` and a small set of hardware-supported atomic instructions.
Hardware implements it differently:
- x86 is built around the "Total Store Order" (TSO) [memory consistency model](https://en.wikipedia.org/wiki/Memory_ordering) and provides `LOCK` variants of the `ADD` and `CMPXCHG`, which act as full-blown "fences" - no loads or stores can be reordered across it.
- Arm, on the other hand, has a "weak" memory model and provides a set of atomic instructions that are not fences, that match the C++ concurrency model, offering `acquire`, `release`, and `acq_rel` variants of each atomic instruction—such as `LDADD`, `STADD`, and `CAS` - which allow precise control over visibility and order, especially with the introduction of "Large System Extension" (LSE) instructions in Armv8.1.
In practice, a locked atomic on x86 requires the cache line in the Exclusive state in the requester's L1 cache.
This would incur a coherence transaction (Read-for-Ownership) if some other core had the line.
Both Intel and AMD handle this similarly.
It makes [Arm and Power much more suitable for lock-free programming](https://arangodb.com/2021/02/cpp-memory-model-migrating-from-x86-to-arm/) and concurrent data structures, but some observations hold for both platforms.
Most importantly, "Compare and Swap" (CAS) is a costly operation and should be avoided whenever possible.
On x86, for example, the `LOCK ADD` [can easily take 50 CPU cycles](https://travisdowns.github.io/blog/2020/07/06/concurrency-costs), being 50x slower than a regular `ADD` instruction, but still easily 5-10x faster than a `LOCK CMPXCHG` instruction.
Once contention rises, the gap naturally widens and is further amplified by the increased "failure" rate of the CAS operation, particularly when the value being compared has already changed.
That's why, for the "dynamic" mode, we resort to using an additional atomic variable as opposed to more typical CAS-based implementations.
### Alignment & False Sharing
The thread-pool needs several atomic variables to synchronize the state.
If those variables are located on the same cache line, they will be "falsely shared" between threads.
This means that when one thread updates one of the variables, it will invalidate the cache line in all other threads, causing them to reload it from memory.
This is a common problem, and the C++ standard recommends addressing it with `alignas(std::hardware_destructive_interference_size)` for your hot variables.
There are, however, caveats.
The `std::hardware_destructive_interference_size` is [generally 64 bytes on x86](https://stackoverflow.com/a/39887282), matching the size of a single cache line.
But in reality, on most x86 machines, [depending on the BIOS "spatial prefetcher" settings](https://www.techarp.com/bios-guide/cpu-adjacent-sector-prefetch/), will [fetch 2 cache lines at a time starting with Sandy Bridge](https://stackoverflow.com/a/72127222).
Because of these rules, padding hot variables to 128 bytes is a conservative but often sensible defensive measure adopted by Folly's `cacheline_align` and Java's `jdk.internal.vm.annotation.Contended`. 
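A minimal sketch of that padding approach (the field names mirror the counters described in the Safety & Logic section below; the 128-byte constant is the conservative choice discussed above, not a value taken from the library):
```cpp
#include <atomic>
#include <cstddef>

// Each hot atomic gets its own 128-byte block, so updating one of them
// doesn't invalidate the cache line holding the others.
struct alignas(128) padded_counter_t {
    std::atomic<std::size_t> value {0};
};

struct hot_state_t {
    padded_counter_t fork_generation;  // bumped by the main thread on every fork
    padded_counter_t threads_to_sync;  // decremented by workers as they finish
    padded_counter_t dynamic_progress; // shared cursor for work-stealing loops
};
```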
## Pro Tips
### Non-Uniform Memory Access (NUMA)
Handling NUMA isn't trivial and is only supported on Linux with the help of the [`libnuma` library](https://github.com/numactl/numactl).
It provides the `mbind` interface to pin specific memory regions to particular NUMA nodes, as well as helper functions to query the system topology, which are exposed via the `fork_union::numa_topology` template.
Let's say you are working on a Big Data application, like brute-forcing Vector Search using the [SimSIMD](https://github.com/ashvardanian/simsimd) library on a dual-socket CPU system, similar to [USearch](https://github.com/unum-cloud/usearch/pulls).
The first part of that program may be responsible for sharding the incoming stream of data between distinct memory regions.
That part, in our simple example, will be single-threaded:
```cpp
#include <vector> // `std::vector`
#include <span> // `std::span`
#include <fork_union.hpp> // `linux_numa_allocator`, `numa_topology_t`, `linux_distributed_pool_t`
#include <simsimd/simsimd.h> // `simsimd_f32_cos`, `simsimd_distance_t`
namespace fu = ashvardanian::fork_union;
using floats_alloc_t = fu::linux_numa_allocator<float>;
constexpr std::size_t dimensions = 768; /// Matches most BERT-like models
static std::vector<float, floats_alloc_t> first_half(floats_alloc_t(0));
static std::vector<float, floats_alloc_t> second_half(floats_alloc_t(1));
static fu::numa_topology_t numa_topology;
static fu::linux_distributed_pool_t distributed_pool;
/// Dynamically shards incoming vectors across 2 nodes in a round-robin fashion.
void append(std::span<float, dimensions> vector) {
bool put_in_second = first_half.size() > second_half.size();
if (put_in_second) second_half.insert(second_half.end(), vector.begin(), vector.end());
else first_half.insert(first_half.end(), vector.begin(), vector.end());
}
```
The concurrent part would involve spawning threads adjacent to every memory pool to find the best `search_result_t`.
The primary `search` function, in an ideal world, would look like this:
1. Each thread finds the best match within its "slice" of a NUMA node, tracking the best distance and index in a local CPU register.
2. All threads in each NUMA node atomically synchronize using a NUMA-local instance of `search_result_t`.
3. The main thread collects aggregates of partial results from all NUMA nodes.
That is, however, overly complicated to implement.
Such tree-like hierarchical reductions are optimal in a theoretical sense. Still, given the low relative cost of spin-locking once at the end of a thread's scope and the complexity of organizing the code, the more straightforward path is better.
A minimal example would look like this:
```cpp
/// On each NUMA node we'll synchronize the threads
struct search_result_t {
simsimd_distance_t best_distance {std::numeric_limits<simsimd_distance_t>::max()};
std::size_t best_index {0};
};
inline search_result_t pick_best(search_result_t const& a, search_result_t const& b) noexcept {
return a.best_distance < b.best_distance ? a : b;
}
/// Uses all CPU threads to search for the closest vector to the @p query.
search_result_t search(std::span<float, dimensions> query) {
bool const need_to_spawn_threads = !distributed_pool.count_threads();
if (need_to_spawn_threads) {
assert(numa_topology.try_harvest() && "Failed to harvest NUMA topology");
assert(numa_topology.count_nodes() == 2 && "Expected exactly 2 NUMA nodes");
assert(distributed_pool.try_spawn(numa_topology, sizeof(search_result_t)) && "Failed to spawn NUMA pools");
}
search_result_t result;
fu::spin_mutex_t result_update; // ? Lighter `std::mutex` alternative w/out system calls
auto concurrent_searcher = [&](auto first_prong, std::size_t count) noexcept {
auto [first_index, _, colocation] = first_prong;
auto& vectors = colocation == 0 ? first_half : second_half;
search_result_t thread_local_result;
for (std::size_t task_index = first_index; task_index < first_index + count; ++task_index) {
simsimd_distance_t distance;
simsimd_f32_cos(query.data(), vectors.data() + task_index * dimensions, dimensions, &distance);
thread_local_result = pick_best(thread_local_result, {distance, task_index});
}
// ! We are spinning on a remote cache line... for simplicity.
std::lock_guard<fu::spin_mutex_t> lock(result_update);
result = pick_best(result, thread_local_result);
};
auto _ = distributed_pool[0].for_slices(first_half.size() / dimensions, concurrent_searcher);
auto _ = distributed_pool[1].for_slices(second_half.size() / dimensions, concurrent_searcher);
return result;
}
```
In a dream world, we would call `distributed_pool.for_n`, but there is no clean way to make the scheduling processes aware of the data distribution in an arbitrary application, so that's left to the user.
Calling `linux_colocated_pool::for_slices` on individual NUMA-node-specific colocated pools is the cheapest general-purpose recipe for Big Data applications.
For more flexibility around building higher-level low-latency systems, there are unsafe APIs expecting you to manually "join" the broadcasted calls, like `unsafe_for_threads` and `unsafe_join`.
Instead of hard-coding the `distributed_pool[0]` and `distributed_pool[1]`, we can iterate through them without keeping the lifetime-preserving handle to the passed `concurrent_searcher`:
```cpp
for (std::size_t colocation = 0; colocation < distributed_pool.colocations_count(); ++colocation)
distributed_pool[colocation].unsafe_for_threads(..., concurrent_searcher);
for (std::size_t colocation = 0; colocation < distributed_pool.colocations_count(); ++colocation)
distributed_pool[colocation].unsafe_join();
```
### Efficient Busy Waiting
Here's what "busy waiting" looks like in C++:
```cpp
while (!has_work_to_do())
std::this_thread::yield();
```
On Linux, the `std::this_thread::yield()` translates into a `sched_yield` system call, which means context switching to the kernel and back.
Instead, you can replace the `standard_yield_t` STL wrapper with a platform-specific "yield" instruction, which is much cheaper.
Those instructions, like [`WFET` on Arm](https://developer.arm.com/documentation/ddi0602/2025-03/Base-Instructions/WFET--Wait-for-event-with-timeout-), generally hint the CPU to transition to a low-power state.
| Wrapper | ISA | Instruction | Privileges |
| ------------------ | ------------ | ----------- | ---------- |
| `x86_yield_t` | x86 | `PAUSE` | R3 |
| `x86_tpause_1us_t` | x86+WAITPKG | `TPAUSE` | R3 |
| `arm64_yield_t` | AArch64 | `YIELD` | EL0 |
| `arm64_wfet_t` | AArch64+WFXT | `WFET` | EL0 |
| `riscv_yield_t` | RISC-V | `PAUSE` | U |
No kernel calls.
No futexes.
Works in tight loops.
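As a rough illustration of what such a wrapper boils down to on x86 (plain intrinsics here, not the library's wrapper API):
```cpp
#include <atomic>
#include <immintrin.h> // `_mm_pause` on x86

// Spin until the flag flips, relaxing the core with PAUSE instead of
// bouncing into the kernel through `sched_yield`.
void spin_until(std::atomic<bool> const& ready) noexcept {
    while (!ready.load(std::memory_order_acquire))
        _mm_pause();
}
```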
## Performance
One of the most common parallel workloads is the N-body simulation ¹.
Implementations are available in both C++ and Rust in `scripts/nbody.cpp` and `scripts/nbody.rs`, respectively.
Both are lightweight and involve little logic outside of number-crunching, so both can be easily profiled with `time` and introspected with `perf` Linux tools.
Additional NUMA-aware Search examples are available in `scripts/search.rs`.
---
C++ benchmarking results for $N=128$ bodies and $I=1e6$ iterations:
| Machine | OpenMP (D) | OpenMP (S) | Fork Union (D) | Fork Union (S) |
| :------------- | ---------: | ---------: | -------------: | -------------: |
| 16x Intel SPR | 20.3s | 16.0s | 18.1s | 10.3s |
| 12x Apple M2 | ? | 1m:16.7s | 1m:30.3s ² | 1m:40.7s ² |
| 96x Graviton 4 | 32.2s | 20.8s | 39.8s | 26.0s |
Rust benchmarking results for $N=128$ bodies and $I=1e6$ iterations:
| Machine | Rayon (D) | Rayon (S) | Fork Union (D) | Fork Union (S) |
| :------------- | --------: | --------: | -------------: | -------------: |
| 16x Intel SPR | 51.4s | 38.1s | 15.9s | 9.8s |
| 12x Apple M2 | 3m:23.5s | 2m:0.6s | 4m:8.4s | 1m:20.8s |
| 96x Graviton 4 | 2m:13.9s | 1m:35.6s | 18.9s | 10.1s |
> ¹ Another common workload is "Parallel Reductions" covered in a separate [repository](https://github.com/ashvardanian/ParallelReductionsBenchmark).
> ² When a combination of performance and efficiency cores is used, dynamic stealing may be more efficient than static slicing.
You can rerun those benchmarks with the following commands:
```bash
cmake -B build_release -D CMAKE_BUILD_TYPE=Release
cmake --build build_release --config Release
time NBODY_COUNT=128 NBODY_ITERATIONS=1000000 NBODY_BACKEND=fork_union_static build_release/fork_union_nbody
time NBODY_COUNT=128 NBODY_ITERATIONS=1000000 NBODY_BACKEND=fork_union_dynamic build_release/fork_union_nbody
```
## Safety & Logic
There are only 3 core atomic variables in this thread-pool, plus 1 more for dynamically stealing tasks.
Let's call every invocation of a `for_*` API - a "fork", and every exit from it a "join".
| Variable | Users Perspective | Internal Usage |
| :----------------- | :--------------------------- | :------------------------------------ |
| `stop` | Stop the entire thread-pool | Tells workers when to exit the loop |
| `fork_generation` | "Forks" called since init | Tells workers to wake up on new forks |
| `threads_to_sync` | Threads not joined this fork | Tells main thread when workers finish |
| `dynamic_progress` | Progress within this fork | Tells workers which jobs to take |
__Why don't we need atomics for "total_threads"?__
The only way to change the number of threads is to `terminate` the entire thread-pool and then `try_spawn` it again.
Either of those operations can only be called from one thread at a time and never coincide with any running tasks.
That's ensured by the `stop`.
__Why don't we need atomics for a "job pointer"?__
A new task can only be submitted from one thread that updates the number of parts for each new fork.
During that update, the workers are asleep, spinning on old values of `fork_generation` and `stop`.
They only wake up and access the new value once `fork_generation` increments, ensuring safety.
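Putting those two answers together, a worker's main loop can be sketched roughly like this (a simplified illustration, not the library's actual code):
```cpp
#include <atomic>
#include <cstddef>

struct worker_state_t {
    std::atomic<bool> stop {false};
    std::atomic<std::size_t> fork_generation {0};
    std::atomic<std::size_t> threads_to_sync {0};
};

// Each worker spins on `stop` and `fork_generation`, runs the new fork once
// the generation advances, and then reports completion for the join.
template <typename run_fork_type>
void worker_loop(worker_state_t& state, run_fork_type run_fork) {
    std::size_t last_generation = 0;
    while (!state.stop.load(std::memory_order_acquire)) {
        std::size_t generation = state.fork_generation.load(std::memory_order_acquire);
        if (generation == last_generation) continue; // no new fork yet, keep spinning
        last_generation = generation;
        run_fork();                                   // the job pointer is stable by now
        state.threads_to_sync.fetch_sub(1, std::memory_order_acq_rel);
    }
}
```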
__How do we deal with overflows and `SIZE_MAX`-sized tasks?__
The library entirely avoids saturating multiplication and only uses one saturating addition in "release" builds.
To test the consistency of arithmetic, the C++ template class can be instantiated with a custom `index_t`, such as `std::uint8_t` or `std::uint16_t`.
In the former case, no more than 255 threads can operate, and no more than 255 tasks can be addressed, allowing us to easily test every weird corner case of [0:255] threads competing for [0:255] tasks.
__Why not reimplement it in Rust?__
The original Rust implementation was a standalone library, but in essence, Rust doesn't feel designed for parallelism, concurrency, and expert Systems Engineering.
It enforces stringent safety rules, which is excellent for building trustworthy software, but realistically, it makes lock-free concurrent programming with minimal memory allocations too complicated.
Now, the Rust library is a wrapper over the C binding of the C++ core implementation.
## Testing and Benchmarking
To run the C++ tests, use CMake:
```bash
cmake -B build_release -D CMAKE_BUILD_TYPE=Release
cmake --build build_release --config Release -j
ctest --test-dir build_release # run all tests
build_release/fork_union_nbody # run the benchmarks
```
For C++ debug builds, consider using the VS Code debugger presets or the following commands:
```bash
cmake -B build_debug -D CMAKE_BUILD_TYPE=Debug
cmake --build build_debug --config Debug # build with Debug symbols
build_debug/fork_union_test_cpp20 # run a single test executable
```
To run static analysis:
```bash
sudo apt install cppcheck clang-tidy
cmake --build build_debug --target cppcheck # detects bugs & undefined behavior
cmake --build build_debug --target clang-tidy # suggest code improvements
```
To include NUMA, Huge Pages, and other optimizations on Linux, make sure to install dependencies:
```bash
sudo apt-get -y install libnuma-dev libnuma1 # NUMA
sudo apt-get -y install libhugetlbfs-dev libhugetlbfs-bin # Huge Pages
sudo ln -s /usr/bin/ld.hugetlbfs /usr/share/libhugetlbfs/ld # Huge Pages linker
```
To build with an alternative compiler, like LLVM Clang, use the following command:
```bash
sudo apt-get install libomp-15-dev clang++-15 # OpenMP version must match Clang
cmake -B build_debug -D CMAKE_BUILD_TYPE=Debug -D CMAKE_CXX_COMPILER=clang++-15
cmake --build build_debug --config Debug
build_debug/fork_union_test_cpp20
```
For Rust, use the following command:
```bash
rustup toolchain install # for Alloc API
cargo miri test # to catch UBs
cargo test --release # to run the tests fast
```
|
https://github.com/NimbleEdge/sparse_transformers
|
sparse_transformers
Sparse Inferencing for transformer based LLMs
Languages: Python (83.0%), C++ (8.7%), Cuda (4.9%), Shell (3.1%), CMake (0.3%)
.github
.github
benchmarks
benchmarks
configs
configs
sparse_transformers
sparse_transformers
src
src
...
.gitignore
.gitignore
CODE_OF_CONDUCT.md
CODE_OF_CONDUCT.md
CONTRIBUTING.md
CONTRIBUTING.md
LICENSE
LICENSE
README.md
README.md
> README.md
[](https://discord.gg/y8WkMncstk)
# Fused Sparse C++ Kernels for Transformers
## Overview
The project implements sparse multiplication and fuses the up/down projections in the MLP layers, selecting the active weights through low-rank activation predictors.
Work is based on [Deja Vu](https://arxiv.org/abs/2310.17157) and Apple's [LLM in a Flash](https://arxiv.org/abs/2312.11514).
### Benefits
- **1.6-1.8x overall gain in TTFT and TPS** (4-5x gain in MLP Inference)
- **26.4%** reduction in memory usage
- **6.7×** faster index selection and replacement for weight caching
```
┌─────────────────────────────────────────────────────────────────┐
│ Sparse LLM Inference Pipeline │
├─────────────────────────────────────────────────────────────────┤
│ Sparsity Selection │
│ ├─ Hidden States → LoRA Projection (Importance Scoring) │
│ ├─ Binary Mask Generation: (scores > threshold) │
│ └─ Mask Normalization: Union across batch dimension │
├─────────────────────────────────────────────────────────────────┤
│ Differential Weight Caching │
│ ├─ Mask Change Detection: XOR with previous mask │
│ ├─ Paired Replacement: Direct substitution algorithm │
│ └─ Zero-Copy Tensor Views: torch::from_blob references │
├─────────────────────────────────────────────────────────────────┤
│ Sparse Computation │
│ ├─ Concatenated Gate+Up Projection (Fused Operation) │
│ ├─ Element-wise Activation: σ(gate) ⊙ up │
│ └─ Sparse Down Projection: Only active intermediate dims │
└─────────────────────────────────────────────────────────────────┘
```
**Keywords:** Large Language Models, Sparse Inference, Differential Weight Caching
## Performance Benchmarks
State of Implementation:
- [x] Torch CPU kernels for fp16, fp32
- [x] Differential weight caching and selection for dynamic sparsity
- [ ] CUDA kernels for Sparse Inferencing
- [ ] CPU kernels for int8, int32, int64
### CPU Performance
```
Sparse LLaMA 3.2 3B vs LLaMA 3.2 3B (on HuggingFace Implementation):
- Time to First Token (TTFT): 1.51× faster (1.209s → 0.803s)
- Output Generation Speed: 1.79× faster (0.7 → 1.2 tokens/sec)
- Total Throughput: 1.78× faster (0.7 → 1.3 tokens/sec)
- Memory Usage: 26.4% reduction (13.25GB → 9.75GB)
```
### GPU Performance
```
Sparse LLaMA 3.2 3B vs Standard LLaMA 3.2 3B CUDA Results (on HuggingFace Implementation):
- Average time (Sparse): 0.021s
- Average time (Standard): 0.018s
- CUDA Speedups: 0.86x (WIP)
```
## Usage
### Quick Benchmark
```bash
# Run comprehensive benchmark
#   --device     'cpu' or 'cuda'
#   --config     model configuration
#   --num_runs   number of benchmark runs
#   --verbose    detailed timing output
python benchmark.py \
    --device cpu \
    --config configs/llama_skip_causal_3b.json \
    --num_runs 50 \
    --verbose True
# Expected output:
# ⚡ TTFT Speedup: 1.51x
# 🚀 Output TPS Speedup: 1.79x
# 📊 Total Throughput Speedup: 1.78x
```
## Implementation Details
### Paired Replacement with Differential Caching
_sparse_transformers/csrc/weight_cache.h_
The weight cache is a class that manages the active weights for the sparse MLP. It differentially updates the MLP tensor memory pool for the next token based on the predicted sparsity mask.
```cpp
class WeightCache {
  public:
    // Paired replacement algorithm for differential updates
    void update_active_weights(const torch::Tensor &mask);
};
```
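As a rough sketch of the mask-diffing step described above (an illustration assuming boolean masks over the intermediate dimensions, not the actual contents of `weight_cache.h`), the changed dimensions can be found with an XOR against the previous mask, and newly activated rows are then paired with newly deactivated slots for in-place replacement:
```cpp
#include <torch/torch.h>
#include <utility>

// Returns (rows to copy into the cache, cache slots they replace).
std::pair<torch::Tensor, torch::Tensor> diff_mask(const torch::Tensor &prev_mask,
                                                  const torch::Tensor &curr_mask) {
    // XOR marks every intermediate dimension whose state changed since the last token.
    torch::Tensor changed = torch::logical_xor(prev_mask, curr_mask);
    // Dimensions that just became active must be loaded into the pool ...
    torch::Tensor to_add = torch::logical_and(changed, curr_mask).nonzero().squeeze(1);
    // ... overwriting the slots of dimensions that just became inactive.
    torch::Tensor to_drop = torch::logical_and(changed, prev_mask).nonzero().squeeze(1);
    return {to_add, to_drop};
}
```
The project's cache additionally performs the actual row copies contiguously (a single memcpy per update), as noted in the performance bullets below.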
**Performance Impact:**
- **6.7× faster cache updates**: 29.89ms (naive `index_select`) → 4.46ms (paired replacement)
- **Better cache locality**: Row major for Up Projection and Column major for Down Projection Matrices
- **Contiguous Memory Access**: Single memcpy for cache updates
### Sparse MLP Inference
_sparse_transformers/csrc/sparse_mlp_op.cpp_
```python
sparse_mlp_forward(
x.detach(),
self.weight_cache.get_concat_weight(),
self.weight_cache.get_active_down_weight(),
self.down_proj_buffer,
self.combined_proj_buffer,
"silu"
)
```
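For intuition, a dense reference of the fused computation invoked above might look like the following. This is a sketch assuming the cached gate and up rows are concatenated along the output dimension (the shapes in the comments are hypothetical), not the project's optimized kernel:
```cpp
#include <torch/torch.h>

// Dense reference of the fused sparse-MLP math: one matmul for the concatenated
// gate+up rows, SiLU gating, then a down projection over only the active dims.
torch::Tensor sparse_mlp_reference(const torch::Tensor &x,               // [batch, hidden]
                                   const torch::Tensor &concat_gate_up,  // [2*active, hidden]
                                   const torch::Tensor &active_down) {   // [hidden, active]
    auto proj = torch::matmul(x, concat_gate_up.t());  // [batch, 2*active]
    auto parts = proj.chunk(2, /*dim=*/1);             // gate and up halves
    auto activated = torch::silu(parts[0]) * parts[1]; // sigma(gate) ⊙ up
    return torch::matmul(activated, active_down.t());  // [batch, hidden]
}
```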
**Performance Impact:**
- **5× faster CPU MLP inference**: 30.1ms → 6.02ms
- OpenMP parallelization with `torch::at::parallel_for`
- Bounded memory usage with weight cache memory pool
## Project Structure
```
├── sparse_transformers/ # C++ extension module
│ ├── csrc/
│ │ ├── sparse_mlp_op.cpp # Main CPU/CUDA dispatcher
│ │ ├── sparse_mlp_cuda.cu # CUDA kernels
│ │ └── weight_cache.h # Paired replacement caching
│ ├── __init__.py # Python bindings
│ └── CMakeLists.txt # Build configuration
├── src/models/llama/
│ ├── modelling_llama_skip.py # Statistical sparsity model
│ └── configuration_llama_skip.py # Model configuration
├── tools/
│ └── component_timing.py # Performance profiling
└── run_benchmark.py # End-to-end benchmarks
```
## Installation
### Build C++ Extensions
```bash
# Clone repository
git clone https://github.com/nimbleedge/sparse_transformers.git
cd sparse_transformers
```
Set up conda environment and install dependencies
```bash
conda create -n sparse_transformers python=3.10
conda activate sparse_transformers
```
Install torch dependencies from [requirements.txt](requirements.txt#L2)
```bash
# Install in editable mode (builds C++ extensions automatically)
pip install -r requirements.txt
pip install -e . # Auto-detect (prefer GPU if available)
pip install -e . --build-option=cpu # Force CPU-only build
pip install -e . --build-option=gpu # Force GPU build (fallback to CPU if not available)
# Alternative: Direct setup.py commands
python setup.py develop # Auto-detect (prefer GPU if available)
python setup.py develop cpu # Force CPU-only build
python setup.py develop gpu # Force GPU build (fallback to CPU if not available)
# Verify installation
python -c "import sparse_transformers; print('✅ Installation successful')"
```
## Community engagement
We welcome any feedback or suggestions - please join our
[Discord](https://discord.gg/y8WkMncstk) to engage with the community.
## Contributing
We welcome contributions from the community! Areas of particular interest are:
- **Additional models**: Extend beyond LLaMA to other architectures
- **Quantization**: Combine with INT8/FP16 optimizations
- **Attention Kernels**: Implement Sparse Attention Kernels
Please read our [Contributing Guidelines](CONTRIBUTING.md) to get started.
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
|
https://github.com/adgaultier/caracal
|
caracal
Make your programs stealthier🐝
Languages: Rust (94.6%), C (4.4%), Just (1.0%)
.github
.github
caracal-common
caracal-common
caracal-ebpf
caracal-ebpf
caracal
caracal
...
.gitignore
.gitignore
Cargo.toml
Cargo.toml
Justfile
Justfile
LICENSE
LICENSE
README.md
README.md
> README.md
<div align="center">
<h1>Caracal</h1>
<h3>Make your programs stealthier </h3>
<img src="https://github.com/user-attachments/assets/089060da-1a14-475d-8aa3-e1bfae15e8f7" style="width: 60%; height: auto;">
<p><small><i>The caracal cat is one of Africa's ultimate hunters,<br> a stealthy cat with an exceptional ability to hunt out prey on the savanna</i></small></p>
</div>
⚡ Powered by [Aya](https://aya-rs.dev)🐝
## 💡 Overview
Caracal is a Rust implementation of eBPF techniques that:
1. hide target bpf programs & maps → won't be visible with `bpftop`, `bpftool` ...
2. hide target processes → won't be visible with `ps`, `top`, `procs`, `ls /proc` ...
3. are resilient to some "unhiding" bruteforce techniques
## 📚 Documentation
Jump to:
- [Focus on 1 & 2](caracal/README.md)
- [Focus on 3](caracal-ebpf/src/deunhide/README.md)
## 🚀 Setup
You need a Linux-based OS.
### ⚒️ Build from source
To build from source, make sure you have:
- [bpf-linker](https://github.com/aya-rs/bpf-linker) installed.
- [rust](https://www.rust-lang.org/tools/install) installed with `nightly` toolchain.
#### 1. Build ebpf program
```
cd caracal-ebpf && cargo build --release
```
#### 2. Build user space program
```
cargo build --release
```
This command will produce the `caracal` executable in `target/release`, which you can add to your `$PATH`
### 📥 Binary release
You can download the pre-built binaries from the [release page](https://github.com/adgaultier/caracal/releases)
<br>
## 🪄 Usage
Run `caracal` with root privileges:
```
caracal --pid <pids> --bpf-prog-id <bpf-ids> -v
```
- `<pids>`: List of process IDs to hide (comma-separated, e.g., 123,456)
- `<bpf-ids>`: List of eBPF program IDs to hide (comma-separated, e.g., 789,101)
- `-v / --verbose`: Verbosity
Example:
```
sudo caracal --pid $PPID,1337 --bpf-prog-id 23,24,26 -v
```
will hide:
- `caracal` launching process & its children
- 1337 process & its children
- `caracal` eBPF program & maps
- 23,24,26 eBPF programs & maps
## ⚠️ Disclaimer
`caracal` is developed for educational purposes only
<br>
## ✍️ Authors
[Adrien Gaultier](https://github.com/adgaultier)
<br>
## ⚖️ License
GPLv3
|
https://github.com/iilegacyyii/DataInject-BOF
|
DataInject-BOF
Hijacks code execution via overwriting Control Flow Guard pointers in combase.dll
Languages: C (98.6%), Makefile (1.4%)
dist
dist
...
.gitattributes
.gitattributes
.gitignore
.gitignore
LICENSE
LICENSE
Makefile
Makefile
README.md
README.md
> README.md
# Data Inject BOF
A beacon object file implementation of the process injection proof-of-concept from my blog post [Control Flow Hijacking via Data Pointers](https://www.legacyy.xyz/defenseevasion/windows/2025/04/16/control-flow-hijacking-via-data-pointers.html).
Hijacks control flow via overwriting `combase.dll`'s Control Flow Guard function pointers called by COM proxying functions.
## Important Notes
- From my testing, `explorer.exe` is the current best candidate in terms of an easy triggering mechanism due to its heavy reliance on COM proxying. Would recommend experimenting.
- **Make sure** shellcode is 64-bit as this BOF only supports 64-bit beacons & target processes.
- This has only been tested on Windows versions `Win10 21H2 (19044.5737)` & `Win11 24H2 (26100.3775)`.
## Usage
```
datainject <pid> <shellcode path>
```
### Examples
For the sake of example, all process IDs are assumed to be `1234`
**Inject into explorer.exe, execute shellcode upon COM call (can be triggered by right clicking or opening file explorer)**
```
datainject 1234 C:\users\attacker\payloads\beacon_x64.bin
```
## References
- [Control Flow Hijacking via Data Pointers](https://www.legacyy.xyz/defenseevasion/windows/2025/04/16/control-flow-hijacking-via-data-pointers.html) - My blog post walking through my methodology for weaponising this.
- [Threadless Inject](https://github.com/CCob/ThreadlessInject) - The project that inspired me to start this research.
|
https://github.com/tchebb/openwv
|
openwv
Open reimplementation of Google's Widevine Content Decryption Module for browsers
Languages: Rust (94.4%), C++ (5.6%)
src
src
third-party
third-party
...
.clang-format
.clang-format
.gitignore
.gitignore
.gitmodules
.gitmodules
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
> README.md
OpenWV is a free and open-source reimplementation of Google's Widevine Content
Decryption Module (CDM), the portion of the Widevine DRM system that runs in
your browser, obtains content keys for protected media, and decrypts the media
using those keys. OpenWV is a drop-in replacement for Google's [official,
proprietary CDM][official-cdm] and implements the same [shared library
API][chromium-cdm-api].
OpenWV does **not** come with a device identity and will not work without one.
A device identity, typically stored as a [`.wvd` file][pywidevine], contains
metadata about a Widevine client as well as a private key that authenticates
that client to Widevine license servers. Depending on the client's identity, a
license server may return low-value content keys (e.g. standard definition
only), high-value keys (e.g. HD/UHD), or no keys at all. If you want to use
OpenWV, you must obtain an appropriate `.wvd` file yourself and include it in
the build as described below.
[official-cdm]: https://github.com/mozilla-firefox/firefox/blob/main/toolkit/content/gmp-sources/widevinecdm.json
## Compilation
Because CDM libraries are heavily sandboxed by browsers, OpenWV cannot read
configuration from disk at runtime. That means that all configuration,
including the device identity mentioned above, must be present at build-time.
As such, there are no official precompiled binaries: **the only way to use
OpenWV is to build it yourself**.
To build OpenWV, follow these steps:
1. Make sure that [Git][git], [Rust][rust], and [Clang][clang-install] are
installed on your system. (To install Clang on Windows 10/11, run
`winget install LLVM.LLVM`.)
2. Clone this repository and its submodule, telling Git to keep the two in sync:
`git clone --recurse-submodules -c submodule.recurse=true https://github.com/tchebb/openwv.git`
3. Place your `.wvd` file in the project root (alongside this README) and name
it `embedded.wvd`. You may set other configuration options as desired by
editing the `CONFIG` variable in `src/config.rs`.
4. Build the library: `cargo build --release`
5. Find the built library in `target/release/`. Depending on your OS, it will
be named `libwidevinecdm.so`, `widevinecdm.dll`, or `libwidevinecdm.dylib`.
[git]: https://git-scm.com/downloads
[rust]: https://rustup.rs/
[clang-install]: https://rust-lang.github.io/rust-bindgen/requirements.html#installing-clang
## Installation
*NOTE: In these instructions, "the OpenWV library" means the library you built
in the last section—`libwidevinecdm.so` on Linux, `widevinecdm.dll` on Windows,
or `libwidevinecdm.dylib` on macOS.*
### Firefox
1. Open `about:support` and note your "Profile Directory".
2. Open `about:config`. Set `media.gmp-widevinecdm.autoupdate` to `false`
(creating it if needed), and set `media.gmp-widevinecdm.version` to `openwv`
(or to any other name for the directory you create in step 3).
3. Navigate to `gmp-widevinecdm/` within your profile directory.
4. Create a subdirectory named `openwv` and place the OpenWV library and
`manifest-firefox.json`, renamed to `manifest.json`, inside it. Note that
you **must** use OpenWV's `manifest.json` instead of Google's, as Firefox
will not play video if we falsely advertise decoding support.
**If you manually check for addon updates, Firefox will replace OpenWV with
Google's CDM**. The `media.gmp-widevinecdm.autoupdate` setting prevents
automatic updates, but [there's no way][firefox-updater] to prevent manual
updates. If this happens, you need only set `media.gmp-widevinecdm.version` back
to `openwv`—no need to repeat the other steps.
### Chrome/Chromium
1. Open `chrome://version/` and note the **parent** directory of your "Profile
Path". This is Chrome's "User Data Directory".
2. Navigate to `WidevineCdm/` within the User Data Directory.
3. If there are any existing subdirectories, delete them.
4. Create a subdirectory named `9999` (or any numeric version greater than that
of Google's CDM), and place OpenWV's `manifest-chromium.json`, renamed to
`manifest.json`, inside it.
5. Beside `manifest.json`, create a directory named `_platform_specific` with
a directory named `{linux,win,mac}_{x86,x64,arm,arm64}`, as appropriate,
inside it. For example, `_platform_specific/linux_x64/` on 64-bit Intel
Linux. Place the OpenWV library in this innermost directory.
6. On Linux only, launch and quit the browser once before playing any
Widevine-protected media. OpenWV will not be loaded on the first launch due
to an [implementation quirk][chromium-hint] of Chromium.
### Kodi (via [InputStream Adaptive](https://github.com/xbmc/inputstream.adaptive))
1. Build OpenWV with `encrypt_client_id: EncryptClientId::Never`, as Kodi
cannot handle service certificate request messages as of this writing
(InputStream Adaptive v21.5.10).
2. In Kodi, navigate to "Add-ons > My add-ons > VideoPlayer InputStream >
InputStream Adaptive" and select "Configure".
3. Ensure the settings level (the gear icon) is set to at least "Advanced".
4. In the "Expert" tab, set "Decrypter path" to the directory where you've put
the OpenWV library. Don't include the library name itself.
[firefox-updater]: https://github.com/mozilla-firefox/firefox/blob/FIREFOX_139_0_RELEASE/toolkit/mozapps/extensions/internal/GMPProvider.sys.mjs#L391-L455
[chromium-hint]: https://source.chromium.org/chromium/chromium/src/+/refs/tags/137.0.7151.59:chrome/common/media/cdm_registration.cc;l=163-187
## References
The APIs, algorithms, and data types used in OpenWV were gathered from a
variety of official and unofficial sources:
- API headers (`third-party/cdm/`) come from [the Chromium source][chromium-cdm-api].
- Widevine protobuf definitions (`third-party/widevine_protos.pb`) were
extracted from `chromecast_oss/chromium/src/out_chromecast_steak/release/pyproto/`
in Google's [Chromecast Ultra v1.42 source drop][steak-1.42-oss].
- The `.wvd` format and many algorithmic details come from the [pywidevine][pywidevine]
project.
[chromium-cdm-api]: https://chromium.googlesource.com/chromium/cdm/
[pywidevine]: https://github.com/devine-dl/pywidevine/
[steak-1.42-oss]: https://drive.google.com/file/d/153TuZqh9FTBKRabGx686tbJefeqM2sJf/view?usp=drive_link
|
https://github.com/ASIG-X/RESPLE
|
RESPLE
The first 6-DoF spline-based recursive motion estimator for LiDAR-based odometry
Languages: C++ (92.8%), Python (4.4%), CMake (2.2%), Dockerfile (0.6%)
AviaResple_msgs
AviaResple_msgs
HAP360_msgs
HAP360_msgs
Mid70Avia_msgs
Mid70Avia_msgs
doc
doc
estimate_msgs
estimate_msgs
...
.gitignore
.gitignore
Dockerfile
Dockerfile
LICENSE
LICENSE
README.md
README.md
> README.md
# RESPLE: Recursive Spline Estimation for LiDAR-Based Odometry
[**YouTube**](https://youtu.be/3-xLRRT25ys) | **[arXiv](https://arxiv.org/abs/2504.11580)** | **[Website](https://asig-x.github.io/resple_web/)**
This is the official repository for RESPLE, the first B-spline-based recursive state estimation framework for estimating 6-DoF dynamic motions. Using RESPLE as the estimation backbone, we developed a unified suite of direct LiDAR-based odometry systems, including:
* LiDAR-only odometry (LO)
* LiDAR-inertial odometry (LIO)
* Multi-LiDAR odometry (MLO)
* Multi-LiDAR-inertial Odometry (MLIO)
These four variants have been tested on real-world datasets and in our own experiments, covering aerial, wheeled, legged, and wearable platforms operating in indoor, urban, and wild environments with diverse LiDAR types. We look forward to your comments and feedback!
### BibTex Citation
```
@ARTICLE{cao2025resple,
author={Cao, Ziyu and Talbot, William and Li, Kailai},
title={RESPLE: Recursive Spline Estimation for LiDAR-Based Odometry},
journal={arXiv preprint arXiv:2504.11580},
year={2025}
}
```
### Dependencies
Tested with [ROS2 Humble](https://docs.ros.org/en/humble/Installation.html) on Ubuntu 22.04
```
sudo apt install libomp-dev libpcl-dev libeigen3-dev
sudo apt install ros-humble-pcl*
# Optional: sudo apt install ros-humble-rosbag2-storage-mcap (for playing .mcap file if testing GrandTour dataset)
```
### Compilation
```
cd ~/ros2_ws/src
git clone --recursive git@github.com:ASIG-X/RESPLE.git
cd ..
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select estimate_msgs livox_ros_driver livox_interfaces livox_ros_driver2 resple
```
## Docker Build
To build a docker image capable of running the examples and dataset:
```bash
cd ~/path/to/src
git clone --recursive git@github.com:ASIG-X/RESPLE.git
cd RESPLE
docker build --ssh default --tag resple .
```
## Our own experimental datasets ([LINK to SURFdrive](https://surfdrive.surf.nl/files/index.php/s/lfXfApqVXTLIS9l))
Password: RESPLE2025
<!--  -->
<!-- [](https://youtu.be/2OvjGnxszf8) -->
<div align="left">
<img src="doc/hemdyn_clip.gif" width=49.6% />
<img src="doc/Rcampus_clip.gif" width = 49.6% >
</div>
<br>
**HelmDyn (Helm Dynamic) dataset**
* 1 Livox Mid360 mounted on a helmet as a mobile platform
* 10 sequences recorded with very dynamic motions combining walking, running, jumping, and in-hand waving within a cubic space
* Ground truth trajectory recorded using a high-precision (submillimeter), low-latency motion capture system (Qualisys) involving 20 cameras
**R-Campus dataset**
* 1 Livox Avia mounted on a bipedal wheeled robot (Direct Drive DIABLO)
* 1 sequence in walking speed recorded in a large-scale campus environment
* Trajectory starts and ends at the same location point.
## Usage
For LIO use, change `if_lidar_only` in `resple/config/config_xxx.yaml` to `false`.
* [HelmDyn](https://surfdrive.surf.nl/files/index.php/s/lfXfApqVXTLIS9l) dataset (Livox Mid360)
```
source install/setup.bash
ros2 launch resple resple_helmdyn01.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* [R-Campus](https://surfdrive.surf.nl/files/index.php/s/lfXfApqVXTLIS9l) dataset (Livox Avia)
```
source install/setup.bash
ros2 launch resple resple_r_campus.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* [NTU VIRAL](https://ntu-aris.github.io/ntu_viral_dataset/) dataset (OUSTER OS1-16)
```
source install/setup.bash
ros2 launch resple resple_eee_02.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* [MCD](https://mcdviral.github.io/) dataset (Livox Mid70)
```
source install/setup.bash
ros2 launch resple resple_ntu_day_01.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/bag/
```
* GrandTour (Hesai XT32, Livox Mid360)
```
source install/setup.bash
ros2 launch resple resple_heap_testsite_hoenggerberg.launch.py
# ros2 launch resple resple_jungfraujoch_tunnel_small.launch.py
# Open another terminal and run
source install/setup.bash
ros2 bag play /path/to/hesai_livox_ap20_converted.mcap
```
### Docker
With the Docker image built (see the Docker build instructions above), one can run the algorithm in a Docker container by following these steps.
Allow the docker user to generate graphics:
```bash
xhost +local:docker
```
Replacing `/path/to/data` with the location of the datasets, run the container (with mounted source code for development):
```bash
docker run -it -e DISPLAY=$DISPLAY \
-v .:/root/ros2_ws/src/RESPLE \
-v /tmp/.X11-unix/:/tmp/.X11-unix/ \
-v ~/data/resple_dataset/:/root/data/resple_dataset \
-v ~/data/grand_tour_box/datasets:/root/data/grand_tour_box/datasets \
--name resple resple
```
Note: To recompile inside the docker container run `colcon build --packages-up-to resple`. If no development is intended, then one can omit `-v .:/root/ros2_ws/src/RESPLE`.
Replacing `<filename>` with the launch file from above, launch with:
```bash
ros2 launch resple <filename>.launch.py
```
Create a second terminal attached to the container with:
```bash
docker exec -it resple bash
```
In this second container, replacing `<example>/<filename>` to make a valid bag filepath, play the dataset:
```bash
ros2 bag play ~/data/resple_dataset/<example>/
```
If the container is already run, then:
* It can be removed with:
```bash
docker rm resple
```
* It can be started with:
```bash
docker start resple
```
* It can be attached to with:
```bash
docker attach resple
```
* It can be stopped with:
```bash
docker stop resple
```
## Contributors
Ziyu Cao (Email: ziyu.cao@liu.se)
William Talbot (Email: wtalbot@ethz.ch)
Kailai Li (Email: kailai.li@rug.nl)
## Credits
Thanks for [SFUISE](https://github.com/ASIG-X/SFUISE), [ikd-Tree](https://github.com/hku-mars/ikd-Tree), [FAST-LIO](https://github.com/hku-mars/FAST_LIO), [Livox-SDK](https://github.com/Livox-SDK), and [basalt](https://gitlab.com/VladyslavUsenko/basalt).
## License
The source code is released under [GPLv3](https://www.gnu.org/licenses/) license.
|
https://github.com/jefferythewind/warpgbm
|
warpgbm
WarpGBM: High-Speed Gradient Boosting
Languages: Python (74.9%), Cuda (21.7%), C++ (3.4%)
.github/workflows
.github/workflows
examples
examples
tests
tests
warpgbm
warpgbm
...
.gitignore
.gitignore
LICENSE
LICENSE
MANIFEST.in
MANIFEST.in
README.md
README.md
pyproject.toml
pyproject.toml
> README.md

# WarpGBM
WarpGBM is a high-performance, GPU-accelerated Gradient Boosted Decision Tree (GBDT) library built with PyTorch and CUDA. It offers blazing-fast histogram-based training and efficient prediction, with compatibility for research and production workflows.
**New in v1.0.0:** WarpGBM introduces *Invariant Gradient Boosting* — a powerful approach to learning signals that remain stable across shifting environments (e.g., time, regimes, or datasets). Powered by a novel algorithm called **[Directional Era-Splitting (DES)](https://arxiv.org/abs/2309.14496)**, WarpGBM doesn't just train faster than other leading GBDT libraries — it trains smarter.
If your data evolves over time, WarpGBM is the only GBDT library designed to *adapt and generalize*.
---
## Contents
- [Features](#features)
- [Benchmarks](#benchmarks)
- [Installation](#installation)
- [Learning Invariant Signals Across Environments](#learning-invariant-signals-across-environments)
- [Why This Matters](#why-this-matters)
- [Visual Intuition](#visual-intuition)
- [Key References](#key-references)
- [Examples](#examples)
- [Quick Comparison with LightGBM CPU version](#quick-comparison-with-lightgbm-cpu-version)
- [Pre-binned Data Example (Numerai)](#pre-binned-data-example-numerai)
- [Documentation](#documentation)
- [Acknowledgements](#acknowledgements)
- [Version Notes](#version-notes)
## Features
- **Blazing-fast GPU training** with custom CUDA kernels for binning, histogram building, split finding, and prediction
- **Invariant signal learning** via [Directional Era-Splitting (DES)](https://arxiv.org/abs/2309.14496) — designed for datasets with shifting environments (e.g., time, regimes, experimental settings)
- Drop-in **scikit-learn style interface** for easy adoption
- Supports **pre-binned data** or **automatic quantile binning**
- Works with `float32` or `int8` inputs
- Built-in **validation and early stopping** support with MSE, RMSLE, or correlation metrics
- Simple install with `pip`, no custom drivers required
> 💡 **Note:** WarpGBM v1.0.0 is a *generalization* of the traditional GBDT algorithm.
> To run standard GBM training at maximum speed, simply omit the `era_id` argument — WarpGBM will behave like a traditional booster but with industry-leading performance.
---
## Benchmarks
### Scikit-Learn Synthetic Data: 1 Million Rows and 1,000 Features
In this benchmark we compare the speed and in-sample correlation of **WarpGBM v1.0.0** against LightGBM, XGBoost and CatBoost, all with their GPU-enabled versions. This benchmark runs on Google Colab with the L4 GPU environment.
```
WarpGBM: corr = 0.8882, train = 17.4s, infer = 3.2s
XGBoost: corr = 0.8877, train = 33.2s, infer = 8.0s
LightGBM: corr = 0.8604, train = 29.8s, infer = 1.6s
CatBoost: corr = 0.8935, train = 392.1s, infer = 379.2s
```
Colab Notebook: https://colab.research.google.com/drive/16U1kbYlD5HibGbnF5NGsjChZ1p1IA2pK?usp=sharing
---
## Installation
### Recommended (GitHub, always latest):
```bash
pip install git+https://github.com/jefferythewind/warpgbm.git
```
This installs the latest version directly from GitHub and compiles CUDA extensions on your machine using your **local PyTorch and CUDA setup**. It's the most reliable method for ensuring compatibility and staying up to date with the latest features.
### Alternatively (PyPI, stable releases):
```bash
pip install warpgbm
```
This installs from PyPI and also compiles CUDA code locally during installation. This method works well **if your environment already has PyTorch with GPU support** installed and configured.
> **Tip:**\
> If you encounter an error related to mismatched or missing CUDA versions, try installing with the following flag. This is currently required in the Colab environments.
>
> ```bash
> pip install warpgbm --no-build-isolation
> ```
### Windows
Thank you, ShatteredX, for providing working instructions for a Windows installation.
```
git clone https://github.com/jefferythewind/warpgbm.git
cd warpgbm
python setup.py bdist_wheel
pip install .\dist\warpgbm-0.1.15-cp310-cp310-win_amd64.whl
```
Before either method, make sure you’ve installed PyTorch with GPU support:\
[https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
---
## Learning Invariant Signals Across Environments
Most supervised learning models rely on an assumption known as the **Empirical Risk Minimization (ERM)** principle. Under ERM, the data distribution connecting inputs \( X \) and targets \( Y \) is assumed to be **fixed** and **stationary** across training, validation, and test splits. That is:
> The patterns you learn from the training set are expected to generalize out-of-sample — *as long as the test data follows the same distribution as the training data.*
However, this assumption is often violated in real-world settings. Data frequently shifts across time, geography, experimental conditions, or other hidden factors. This phenomenon is known as **distribution shift**, and it leads to models that perform well in-sample but fail catastrophically out-of-sample.
This challenge motivates the field of **Out-of-Distribution (OOD) Generalization**, which assumes your data is drawn from **distinct environments or eras** — e.g., time periods, customer segments, experimental trials. Some signals may appear predictive within specific environments but vanish or reverse in others. These are called **spurious signals**. On the other hand, signals that remain consistently predictive across all environments are called **invariant signals**.
WarpGBM v1.0.0 introduces **Directional Era-Splitting (DES)**, a new algorithm designed to identify and learn from invariant signals — ignoring signals that fail to generalize across environments.
---
### Why This Matters
- Standard models trained via ERM can learn to exploit **spurious correlations** that only hold in some parts of the data.
- DES explicitly tests whether a feature's split is **directionally consistent** across all eras — only such *invariant splits* are kept (see the sketch after this list).
- This approach has been shown to reduce overfitting and improve out-of-sample generalization, particularly in financial and scientific datasets.
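As a rough sketch of that directional test (illustrative only; WarpGBM evaluates this per era inside its CUDA split-finding kernels), a candidate split passes only if its effect points the same way in every era:
```cpp
#include <cstddef>
#include <vector>

// Keep a split only if the sign of (left-child mean gradient - right-child mean
// gradient) agrees across all eras; a flipped sign marks the signal as spurious.
bool split_is_directionally_consistent(const std::vector<double> &left_mean_per_era,
                                       const std::vector<double> &right_mean_per_era) {
    int reference_sign = 0;
    for (std::size_t e = 0; e < left_mean_per_era.size(); ++e) {
        double diff = left_mean_per_era[e] - right_mean_per_era[e];
        int sign = (diff > 0.0) - (diff < 0.0);
        if (sign == 0) continue;                        // no clear direction in this era
        if (reference_sign == 0) reference_sign = sign; // first era sets the direction
        else if (sign != reference_sign) return false;  // direction flips across eras
    }
    return true;
}
```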
---
### Visual Intuition
We contrast two views of the data:
- **ERM Setting**: All data is assumed to come from the same source (single distribution).\
No awareness of environments — spurious signals can dominate.
- **OOD Setting (Era-Splitting)**: Data is explicitly grouped by environment (era).\
The model checks whether a signal holds across all groups — enforcing **robustness**.
<img src="https://github.com/user-attachments/assets/2be11ef3-6f2e-4636-ab91-307a73add247" alt="ChatGPT Image May 28, 2025, 05_05_09 PM" width="320"/>
---
### Key References
- **Invariant Risk Minimization (IRM)**: [Arjovsky et al., 2019](https://arxiv.org/abs/1907.02893)
- **Learning Explanations That Are Hard to Vary**: [Parascandolo et al., 2020](https://arxiv.org/abs/2009.00329)
- **Era Splitting: Invariant Learning for Decision Trees**: [DeLise, 2023](https://arxiv.org/abs/2309.14496)
---
WarpGBM is the **first open-source GBDT framework to integrate this OOD-aware approach natively**, using efficient CUDA kernels to evaluate per-era consistency during tree growth. It’s not just faster — it’s smarter.
---
## Examples
WarpGBM is easy to drop into any supervised learning workflow and comes with curated examples in the `examples/` folder.
- `Spiral Data.ipynb`: synthetic OOD benchmark from Learning Explanations That Are Hard to Vary
### Quick Comparison with LightGBM CPU version
```python
import numpy as np
from sklearn.datasets import make_regression
from time import time
import lightgbm as lgb
from warpgbm import WarpGBM
# Create synthetic regression dataset
X, y = make_regression(n_samples=100_000, n_features=500, noise=0.1, random_state=42)
X = X.astype(np.float32)
y = y.astype(np.float32)
# Train LightGBM
start = time()
lgb_model = lgb.LGBMRegressor(max_depth=5, n_estimators=100, learning_rate=0.01, max_bin=7)
lgb_model.fit(X, y)
lgb_time = time() - start
lgb_preds = lgb_model.predict(X)
# Train WarpGBM
start = time()
wgbm_model = WarpGBM(max_depth=5, n_estimators=100, learning_rate=0.01, num_bins=7)
wgbm_model.fit(X, y)
wgbm_time = time() - start
wgbm_preds = wgbm_model.predict(X)
# Results
print(f"LightGBM: corr = {np.corrcoef(lgb_preds, y)[0,1]:.4f}, time = {lgb_time:.2f}s")
print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, y)[0,1]:.4f}, time = {wgbm_time:.2f}s")
```
**Results (Ryzen 9 CPU, NVIDIA 3090 GPU):**
```
LightGBM: corr = 0.8742, time = 37.33s
WarpGBM: corr = 0.8621, time = 5.40s
```
---
### Pre-binned Data Example (Numerai)
WarpGBM can save additional training time if your dataset is already pre-binned. The Numerai tournament data is a great example:
```python
import pandas as pd
from numerapi import NumerAPI
from time import time
import lightgbm as lgb
from warpgbm import WarpGBM
import numpy as np
napi = NumerAPI()
napi.download_dataset('v5.0/train.parquet', 'train.parquet')
train = pd.read_parquet('train.parquet')
feature_set = [f for f in train.columns if 'feature' in f]
target = 'target_cyrus'
X_np = train[feature_set].astype('int8').values
Y_np = train[target].values
# LightGBM
start = time()
lgb_model = lgb.LGBMRegressor(max_depth=5, n_estimators=100, learning_rate=0.01, max_bin=7)
lgb_model.fit(X_np, Y_np)
lgb_time = time() - start
lgb_preds = lgb_model.predict(X_np)
# WarpGBM
start = time()
wgbm_model = WarpGBM(max_depth=5, n_estimators=100, learning_rate=0.01, num_bins=7)
wgbm_model.fit(X_np, Y_np)
wgbm_time = time() - start
wgbm_preds = wgbm_model.predict(X_np)
# Results
print(f"LightGBM: corr = {np.corrcoef(lgb_preds, Y_np)[0,1]:.4f}, time = {lgb_time:.2f}s")
print(f"WarpGBM: corr = {np.corrcoef(wgbm_preds, Y_np)[0,1]:.4f}, time = {wgbm_time:.2f}s")
```
**Results (Google Colab Pro, A100 GPU):**
```
LightGBM: corr = 0.0703, time = 643.88s
WarpGBM: corr = 0.0660, time = 49.16s
```
---
## Documentation
### `WarpGBM` Parameters:
- `num_bins`: Number of histogram bins to use (default: 10)
- `max_depth`: Maximum depth of trees (default: 3)
- `learning_rate`: Shrinkage rate applied to leaf outputs (default: 0.1)
- `n_estimators`: Number of boosting iterations (default: 100)
- `min_child_weight`: Minimum sum of instance weight needed in a child (default: 20)
- `min_split_gain`: Minimum loss reduction required to make a further partition (default: 0.0)
- `histogram_computer`: Choice of histogram kernel (`'hist1'`, `'hist2'`, `'hist3'`) (default: `'hist3'`)
- `threads_per_block`: CUDA threads per block (default: 32)
- `rows_per_thread`: Number of training rows processed per thread (default: 4)
- `L2_reg`: L2 regularizer (default: 1e-6)
- `colsample_bytree`: Proportion of features to subsample to grow each tree (default: 1)
### Methods:
```
.fit(
X, # numpy array (float or int) 2 dimensions (num_samples, num_features)
y, # numpy array (float or int) 1 dimension (num_samples)
era_id=None, # numpy array (int) 1 dimension (num_samples)
X_eval=None, # numpy array (float or int) 2 dimensions (eval_num_samples, num_features)
y_eval=None, # numpy array (float or int) 1 dimension (eval_num_samples)
eval_every_n_trees=None, # const (int) >= 1
early_stopping_rounds=None, # const (int) >= 1
eval_metric='mse' # string, one of 'mse', 'rmsle' or 'corr'. For corr, loss is 1 - correlation(y_true, preds)
)
```
Train with optional validation set and early stopping.
```
.predict(
X # numpy array (float or int) 2 dimensions (predict_num_samples, num_features)
)
```
Predict on new data, using parallelized CUDA kernel.
---
## Acknowledgements
WarpGBM builds on the shoulders of PyTorch, scikit-learn, LightGBM, and the CUDA ecosystem. Thanks to all contributors in the GBDT research and engineering space.
---
## Version Notes
### v0.1.21
- Vectorized predict function replaced with CUDA kernel (`warpgbm/cuda/predict.cu`), parallelizing per sample, per tree.
### v0.1.23
- Adjusted gain in split kernel and added support for an eval set with early stopping based on MSE.
### v0.1.25
- Added `colsample_bytree` parameter and new test using Numerai data.
### v0.1.26
- Fixed memory bugs in prediction and `colsample_bytree` logic. Added "corr" eval metric.
### v1.0.0
- Introduced invariant learning via directional era-splitting (DES). Also streamlined VRAM usage compared to previous sub-versions.
|
https://github.com/ga2mer/MarathonRecomp
|
MarathonRecomp
An unofficial PC port of the Xbox 360 version of Sonic the Hedgehog (2006) created through the process of static recompilation
Languages: C++ (92.5%), CMake (3.4%), HLSL (1.9%), Metal (1.7%)
.github
.github
MarathonRecomp
MarathonRecomp
MarathonRecompLib
MarathonRecompLib
docs
docs
flatpak
flatpak
...
.editorconfig
.editorconfig
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
CMakePresets.json
CMakePresets.json
> README.md
<p align="center">
<img src="https://raw.githubusercontent.com/IsaacMarovitz/MarathonRecompResources/refs/heads/main/images/logo/Logo.png" width="512"/>
</p>
---
> [!CAUTION]
> This recompilation is still under active development and is NOT meant for public use. Support will not be provided until an official release.
Marathon Recompiled is an unofficial PC port of the Xbox 360 version of Sonic the Hedgehog (2006) created through the process of static recompilation. The port offers Windows, Linux, and macOS support.
**This project does not include any game assets. You must provide the files from your own legally acquired copy of the game to install or build Marathon Recompiled.**
[XenonRecomp](https://github.com/sonicnext-dev/XenonRecomp) and [XenosRecomp](https://github.com/sonicnext-dev/XenosRecomp) are the main recompilers used for converting the game's original PowerPC code and Xenos shaders into compatible C++ and HLSL code respectively. The development of these recompilers was directly inspired by [N64: Recompiled](https://github.com/N64Recomp/N64Recomp), which was used to create [Zelda 64: Recompiled](https://github.com/Zelda64Recomp/Zelda64Recomp).
## Table of Contents
- [Known Issues](#known-issues)
- [FAQ](#faq)
- [Building](#building)
- [Credits](#credits)
## Known Issues
Before reporting any issues, check if they are listed [here](https://github.com/sonicnext-dev/MarathonRecomp/issues).
### Original Game Bugs
Game bugs present on the original hardware are intentionally preserved and will not be fixed apart from a few minor exceptions in [#44](https://github.com/sonicnext-dev/MarathonRecomp/issues/44). Please do not report issues for these bugs and verify that the issue does not occur on original hardware before reporting. Bug reports for issues found in the original game will be rejected. Bugs that only happen in Marathon Recompiled must be accompanied by footage captured on original Xbox 360 hardware showing that the bug does not happen there.
### File Picker Unavailable on Steam Deck in Game Mode
Due to some restrictions of how the desktop environment on the Steam Deck works whilst in Game Mode, please note that you may need to at least first boot into Desktop Mode to be able to use the file picker to provide the game files.
Simply booting at least once in Desktop Mode will enable the Deck to use the file picker when going back to Game Mode. You can complete the entire installation process while in Desktop Mode to save yourself the trouble of browsing through Game Mode if necessary.
## FAQ
### Do you have a website?
Marathon Recompiled does not have an official website.
**Please link here when directing anyone to the project.**
> [!CAUTION]
> Do not download builds of Marathon Recompiled from anywhere but our [Releases](https://github.com/sonicnext-dev/MarathonRecomp/releases/latest) page.
>
> **We will never distribute builds on other websites, via Discord servers or via third-party update tools.**
### Why does the installer say my files are invalid?
The installer may display this error for several reasons. Please check the following to ensure your files are valid:
- Please read the [How to Install](#how-to-install) section and make sure you've acquired all of the necessary files correctly.
- Verify that you're not trying to add compressed files such as `.zip`, `.7z`, `.rar` or other formats.
- Only use the **Add Folder** option if you're sure you have a directory with the content's files already extracted, which means it'll only contain files like `.xex`, `.ar.00`, `.arl` and others. **This option will not scan your folder for compatible content**.
- Ensure that the files you've acquired correspond to the same region. **Discs and Title Updates from different regions can't be used together** and will fail to generate a patch.
- The installer will only accept **original and unmodified files**. Do not attempt to provide modified files to the installer.
### What are the keyboard bindings?
Pad|Key
-|-
A (Cross)|S
B (Circle)|D
X (Square)|A
Y (Triangle)|W
D-Pad - Up|Unbound
D-Pad - Down|Unbound
D-Pad - Left|Unbound
D-Pad - Right|Unbound
Start|Return
Back (Select)|Backspace
Left Trigger (L2)|1
Right Trigger (R2)|3
Left Bumper (L1)|Q
Right Bumper (R1)|E
Left Stick - Up|Up Arrow
Left Stick - Down|Down Arrow
Left Stick - Left|Left Arrow
Left Stick - Right|Right Arrow
Right Stick - Up|Unbound
Right Stick - Down|Unbound
Right Stick - Left|Unbound
Right Stick - Right|Unbound
---
You can change the keyboard bindings by editing `config.toml` located in the [configuration directory](#where-is-the-save-data-and-configuration-file-stored), although using a controller is highly recommended until Action Remapping is added in a future update.
Refer to the left column of [this enum template](https://github.com/sonicnext-dev/MarathonRecomp/blob/main/MarathonRecomp/user/config.cpp#L40) for a list of valid keys.
*The default keyboard layout is based on Devil's Details' keyboard layout for Sonic Generations (2011)*.
### Where is the save data and configuration file stored?
The save data and configuration files are stored at the following locations:
- Windows: `%APPDATA%\MarathonRecomp\`
- Linux: `~/.config/MarathonRecomp/`
You will find the save data under the `save` folder. The configuration file is named `config.toml`.
### I want to update the game. How can I avoid losing my save data? Do I need to reinstall the game?
Updating the game can be done by simply copying and replacing the files from a [release](https://github.com/sonicnext-dev/MarathonRecomp/releases) on top of your existing installation. **Your save data and configuration will not be lost.** You won't need to reinstall the game, as the game files will always remain the same across versions of Marathon Recompiled.
### How can I force the game to store the save data and configuration in the installation folder?
You can make the game ignore the [default configuration paths](#where-is-the-save-data-and-configuration-file-stored) and force it to save everything in the installation directory by creating an empty `portable.txt` file. You are directly responsible for the safekeeping of your save data and configuration if you choose this option.
### How can I force the game to run the installation again?
While it's unlikely you'll need to do this unless you've modified your game files by accident, you can force the installer to run again by using the launch argument: `--install`.
### How can I force the game to run under X11 or Wayland?
Use either of the following arguments to force SDL to run under the video driver you want:
- X11: `--sdl-video-driver x11`
- Wayland: `--sdl-video-driver wayland`
The second argument will be passed directly to SDL as a hint to try to initialize the game with your preferred option.
### Where is the game data for the Flatpak version installed?
Given it is not possible to run the game where the Flatpak is stored, the game data will be installed to `~/.var/app/io.github.sonicnext_dev.marathonrecomp/data`. The Flatpak build will only recognize this directory as valid. Feel free to reuse this data directory with a native Linux build if you wish to switch in the future.
If you wish to move this data to another location, you can do so by creating a symlink from this directory to the one where you'll migrate your installation to.
> [!WARNING]
> Using external frame rate limiters or performance overlays may degrade performance or have negative consequences.
### Can I install the game with a PlayStation 3 copy?
**You cannot use the files from the PlayStation 3 version of the game.** Supporting these files would require an entirely new recompilation, as they have proprietary formatting that only works on PS3 and the code for these formats is only present in that version. All significant differences present in the PS3 version of the game have been included in this project as options.
### Why is the game detecting my PlayStation controller as an Xbox controller?
If you're using a third-party input translation layer (such as DS4Windows or Steam Input), it is recommended that you disable these for full controller support.
### What other platforms will be supported?
This project does not plan to support any more platforms other than Windows, Linux and macOS at the moment. Any contributors who wish to support more platforms should do so through a fork.
## Building
[Check out the building instructions here](/docs/BUILDING.md).
## Credits
### Marathon Recompiled
- [ga2mer](https://github.com/ga2mer): Creator and Lead Developer of the recompilation.
- [Rei-san](https://github.com/ReimousTH): Game Internals Researcher and Patch Developer.
- [squidbus](https://github.com/squidbus): Graphics Developer.
- [IsaacMarovitz](https://github.com/IsaacMarovitz): Graphics & Installer Developer.
- [Hyper](https://github.com/hyperbx): Custom menus and Game Internals Researcher.
- [LJSTAR](https://github.com/LJSTARbird): Artist behind the project logo.
- [Skyth](https://github.com/blueskythlikesclouds): Lead Developer of Unleashed Recompiled and endlessly helpful resource.
- [Darío](https://github.com/DarioSamo): Maintainer of [Plume](https://github.com/renderbag/plume) & Graphics Developer.
- [Hotline Sehwani](https://www.youtube.com/watch?v=8mfOSTcTQNs): Artist behind installer music.
- [Syko](https://x.com/UltraSyko): Helped identify the fonts used in the original SonicNext logo.
### Unleashed Recompiled
- [Skyth](https://github.com/blueskythlikesclouds)
- [Sajid](https://github.com/Sajidur78)
- [Hyper](https://github.com/hyperbx)
- [Darío](https://github.com/DarioSamo)
- [ĐeäTh](https://github.com/DeaTh-G)
- [RadiantDerg](https://github.com/RadiantDerg)
- [PTKay](https://github.com/PTKay)
- [SuperSonic16](https://github.com/thesupersonic16)
- [NextinHKRY](https://github.com/NextinMono)
- [LadyLunanova](https://linktr.ee/ladylunanova)
- [LJSTAR](https://github.com/LJSTARbird)
- [saguinee](https://twitter.com/saguinee)
- [Goalringmod27](https://linktr.ee/goalringmod27)
- [M&M](https://github.com/ActualMandM)
- [DaGuAr](https://twitter.com/TheDaguar)
- [brianuuuSonic](https://github.com/brianuuu)
- [Kitzuku](https://github.com/Kitzuku)
|
https://github.com/Vector35/scc
|
scc
Languages: C (51.9%), C++ (32.4%), Roff (11.2%), M4 (1.7%), Yacc (0.9%), HTML (0.5%)
buildenv/msys
buildenv/msys
codegen
codegen
docs
docs
runtime
runtime
tests
tests
...
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitmodules
.gitmodules
AArch64.cgen
AArch64.cgen
Arm.cgen
Arm.cgen
> README.md
# Shellcode Compiler
The Shellcode Compiler started its life as an internal CTF tool before it was re-purposed to be the compiler integrated into Binary Ninja.
With the 5.0 release of [Binary Ninja](https://binary.ninja/), this repository was open-sourced. In the future, it's likely that SCC may be migrated into the main [binaryninja-api](https://github.com/Vector35/binaryninja-api/) repository.
Long-term our plan is to replace scc with a version of LLVM using the appropriate compiler flags for minimal shellcode-style codegen. (We're already embedding multiple copies of LLVM -- one for the type parser and one for the debugger, so this need not be as much of a burden as it might sound.)
Note that scc is not being actively maintained; however, pull requests and [issues](https://github.com/Vector35/binaryninja-api/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22Component%3A%20SCC%22) are welcome.
## Documentation
Online documentation is available at: [https://scc.binary.ninja/](https://scc.binary.ninja/)
## Usage and Build Instructions
The build system uses cmake:
```
$ git clone --recursive https://github.com/vector35/scc
$ cd scc
$ cmake -S . -B build
...
$ cmake --build build
```
## Licensing
Some components may be released under compatible but slightly different open source licenses and should have their own LICENSE file as appropriate.
Remaining components are released under an [MIT](https://github.com/Vector35/scc/blob/dev/LICENSE.txt) license.
|
https://github.com/bvanjoi/bolt-ts
|
bolt-ts
A TypeScript Compiler Implemented in Rust
Languages: Rust (77.3%), TypeScript (19.7%), JavaScript (3.0%)
.github/workflows
.github/workflows
.vscode-template
.vscode-template
crates
crates
helper
helper
tests/cases/compiler
tests/cases/compiler
...
.gitignore
.gitignore
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
README.md
README.md
rust-toolchain.toml
rust-toolchain.toml
> README.md
# bolt-ts
bolt-ts is a TypeScript compiler implemented in Rust. The current implementation heavily leverages code ported from the original TypeScript compiler (tsc).
## Performance
When testing a subset of `type-fest` functionality, bolt-ts demonstrates:
- 2.5× faster than ts-go
- 5× faster than tsc
(Benchmarked on Apple M3 Max with 36GB RAM. See [typescript-compiler-bench](https://github.com/bvanjoi/typescript-compiler-bench) for details)
## Current Status
Core functionalities are operational but require refinement. Key pending improvements include:
- Parser: async functions, switch/with statements.
- Module Resolution: cache, `exports`/`imports` field support, `node_modules/@types` type definition resolution.
- Type Checking: enum implementation and various edge-case bugs.
- Output Generation: sourcemap generation, different module systems.
- And others: JS file processing, language service, etc.
|
https://github.com/NVIDIA-RTX/RTXNS
|
RTXNS
NVIDIA Neural Shading SDK
Languages: C++ (60.2%), Slang (30.8%), CMake (9.0%)
assets/data
assets/data
docs
docs
external
external
samples
samples
src
src
...
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitmodules
.gitmodules
CHANGELOG.md
CHANGELOG.md
CMakeLists.txt
CMakeLists.txt
> README.md
# RTX Neural Shading
RTX Neural Shading (RTXNS), also known as RTX Neural Shaders, is intended as a starting point for developers interested in bringing Machine Learning (ML) to their graphics applications. It provides a number of examples to help the reader understand how to train their own neural networks and then use those models to perform inference alongside their normal graphics rendering.
RTXNS uses the [Slang](https://shader-slang.com) shading language and utilizes either the DirectX Preview Agility SDK or the Vulkan Cooperative Vectors extension to provide access to the GPU's ML acceleration.
A number of examples are included which build upon each other from a simple inference example to more complex examples showing how to train a neural network to represent a shader or a texture. Helper functions to facilitate building your own neural networks are also included.
Alongside the core samples is a SlangPy sample to demonstrate how to use Python and SlangPy for fast iteration and development of neural networks which can then be integrated into RTXNS for inference.
When exploring RTXNS, it is assumed that the reader is already familiar with ML and neural networks.
## Requirements
### General
[CMake v3.24.3][CMake] **|** [VS 2022][VS22] **|** [Slang v2025.10](https://shader-slang.com/tools/)
### DirectX
[DirectX Preview Agility SDK 1.717.0-preview](https://www.nuget.org/packages/Microsoft.Direct3D.D3D12/1.717.0-preview) **|** [Microsoft DXC 1.8.2505.28](https://www.nuget.org/packages/Microsoft.Direct3D.DXC/1.8.2505.28) **|** [Shader Model 6-9-Preview Driver](https://developer.nvidia.com/downloads/shadermodel6-9-preview-driver)
### Vulkan
GPU must support the Vulkan `VK_NV_cooperative_vector` extension (minimum NVIDIA RTX 20XX) **|** [Vulkan SDK 1.3.296.0](https://vulkan.lunarg.com/sdk/home) **|** Public Driver ≥ 572.16
## Known Issues
05/30/2025: When updating from v1.0.0 to v1.1.0, it is recommended to delete the CMake cache to avoid build errors.
## Project structure
| Directory | Details |
| --------------------------------- | -------------------------------------- |
| [/assets](assets) | _Asset files for samples_ |
| [/docs](docs) | _Documentation for showcased tech_ |
| [/samples](samples) | _Samples showcasing usage of MLPs_ |
| [/external/donut](external/donut) | _Framework used for the examples_ |
| [/external](external) | _Helper dependencies for the examples_ |
| [/src](src) | _Helper and utility functions_ |
## Getting started
- [Quick start guide](docs/QuickStart.md) for building and running the neural shading samples.
- [Library usage guide](docs/LibraryGuide.md) for using helper functions
### External Resources
This project uses [Slang](https://shader-slang.com) and the Vulkan CoopVector extensions. The following links provide more detail on these, and other technologies which may help the reader to better understand the relevant technologies, or just to provide further reading.
* [Slang User Guide](https://shader-slang.com/slang/user-guide/)
* [Automatic Differentiation](https://shader-slang.com/slang/user-guide/autodiff.html)
* [SlangPy](https://slangpy.readthedocs.io/en/latest/)
* [Vulkan `VK_NV_cooperative_vector` extension](https://registry.khronos.org/vulkan/specs/latest/man/html/VK_NV_cooperative_vector.html)
* [Donut](https://github.com/NVIDIAGameWorks/donut)
## Contact
RTXNS is actively being developed. Please report any issues directly through the GitHub issue tracker, and for any information or suggestions contact us at rtxns-sdk-support@nvidia.com
## Citation
Use the following BibTex entry to cite the usage of RTXNS in published research:
```bibtex
@online{RTXNS,
title = {{{NVIDIA}}\textregistered{} {RTXNS}},
author = {{NVIDIA}},
year = 2025,
url = {https://github.com/NVIDIA-RTX/RTXNS},
urldate = {2025-02-03},
}
```
## License
See [LICENSE.md](LICENSE.MD)
[VS22]: https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community&channel=Release&version=VS2022&source=VSLandingPage&passive=false&cid=2030
[CMake]: https://github.com/Kitware/CMake/releases/download/v3.24.3/cmake-3.24.3-windows-x86_64.msi
|
https://github.com/SamoZ256/hydra
|
hydra
A Nintendo Switch emulator for macOS
Languages: C++ (83.8%), C (12.1%), Swift (1.9%), CMake (1.7%)
.github/workflows
.github/workflows
externals
externals
img
img
res/nx-hbloader
res/nx-hbloader
src
src
...
.clang-format
.clang-format
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
LICENSE.txt
LICENSE.txt
> README.md
# Hydra
Hydra is an experimental Nintendo Switch emulator for macOS.
## Status
The emulator is still in very early stages. A few homebrew apps work perfectly, and some official games get in-game with various degrees of playability.

Only the NRO, NSO and NCA formats are supported. You can extract an NSP file into NCA with [this tool](https://github.com/SamoZ256/switch-extract-macos).
In order to run official games, you will need to download a set of patches to prevent crashes. You can get the patches together with a guide on how to install them [here](https://github.com/SamoZ256/hydra-patches).
## Usage
### Dependencies
You can install Hydra dependencies with a package manager of your choice, like `brew`.
```sh
brew install cmake ninja sdl3 fmt
```
### Building
First, clone the repository and update submodules.
```sh
git clone https://github.com/SamoZ256/hydra.git
cd hydra
git submodule update --init --recursive
```
Now configure CMake and build with Ninja.
```sh
cmake . -B build -G Ninja -DMACOS_BUNDLE=ON
ninja -C build
```
If you want to use the SwiftUI frontend instead of SDL3, you can use the `-DFRONTEND=SwiftUI` option.
### Running
If you built a macOS bundle, you will find a macOS app at `build/bin/Hydra.app`. Otherwise, you can run the emulator with the following command:
```sh
build/bin/hydra
```
For SDL3, you can drag and drop a ROM into the window or provide a path to the ROM as an argument when launching the emulator.
### Configuring
You can find a config file at `/Users/USER/Library/Application Support/Hydra/config.toml` after launching the emulator at least once.
|
https://github.com/pRain1337/plouton
|
plouton
System Management Mode (SMM) game cheating framework
Languages: C (89.6%), DenizenScript (10.1%)
Plouton-UEFI
Plouton-UEFI
images
images
...
.gitignore
.gitignore
.gitmodules
.gitmodules
LICENSE.md
LICENSE.md
README.md
README.md
> README.md
# Plouton - a System Management Mode (SMM) cheat framework
<p align="center">
<img src="/images/logo_plouton.jpg" alt="Picture of Plouton" width="600">
</p>
*Plouton was the master of the underworld, and thus he was able to command the riches of the earth and the souls of the dead.*
Plouton is a System Management Mode (SMM) (ring-2, *"underworld"*) PC game cheat framework.
This repository and code were created as a proof of concept and released as open source; we do not take any responsibility for further usage of this project.
Check out this [video demonstration](https://www.youtube.com/watch?v=HoLtvFKOZzY) of Plouton's CS2 cheat implementation.
# Supported platforms
Plouton supports only Intel-based systems; on AMD-based systems some important features (e.g. the XHCI controller generating SMIs on USB events) are not available. The core functionality and memory code would still be applicable and could be reused.
The core has been tested on Intel generations from Series 200 (Skylake, Kaby Lake) up to Series 700 (Alder Lake, Raptor Lake).
According to the offsets in the Intel Chipset datasheet for the Series 800, it should also be supported but has not been tested.
# Building
See [Plouton-UEFI](Plouton-UEFI)
# Extending
To extend Plouton to support your game of choice, see [targets](Plouton-UEFI/Plouton/target/)
To extend Plouton to support your hardware (mouse, audio devices), see [hardware](Plouton-UEFI/Plouton/hardware/)
To extend Plouton to an OS other than Windows: sorry, this is not currently possible :-)
Contributions are welcome!
|
https://github.com/Lagrange-Labs/deep-prove
|
deep-prove
Framework to prove inference of ML models blazingly fast
Languages: Rust (94.6%), Python (5.2%)
.github
.github
deep-prove
deep-prove
docker
docker
docs
docs
ff_ext
ff_ext
...
.envrc
.envrc
.gitignore
.gitignore
.gitmodules
.gitmodules
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
> README.md
# 🚀 DeepProve: Zero-Knowledge Machine Learning (zkml) Inference
Welcome to **DeepProve**, a cutting-edge framework designed to prove neural network inference using zero-knowledge cryptographic techniques. Whether you're working with Multi-Layer Perceptrons (MLPs) or Convolutional Neural Networks (CNNs), DeepProve offers a fast and efficient way to verify computations without revealing the underlying data.
zkml is the name of the subcrate implementing the proving logic.
## 🤔 What Does DeepProve Do?
DeepProve leverages advanced cryptographic methods like sumchecks and logup GKR to achieve sublinear proving times. This means you can prove the correctness of your model's inference faster than ever before!
### 📊 Benchmark Highlights
CNN 264k: This runs a CNN on the CIFAR-10 dataset with a total of 264k parameters. DeepProve proves 158x faster at this size!
Dense 4M: This runs multiple dense layers with a total of 4 million parameters. DeepProve proves 54x faster at this size!
| Model Type | ZKML Proving Time (ms) | ZKML Verification Time (ms) | EZKL Proving Time (ms) | EZKL Verification Time (ms) |
|------------|------------------------|-----------------------------|------------------------|-----------------------------|
| CNN 264k | 1242 | 599 | 196567.01 | 312505 |
| Dense 4M | 2335 | 520 | 126831.3 | 1112 |
## 📜 Licensing
- **zkml folder**: Licensed under the [Lagrange License](https://github.com/Lagrange-Labs/deep-prove/blob/master/zkml/LICENSE), unless otherwise specified.
- **Rest of the Code**: Licensed under Apache 2.0 + MIT, as per the original repository.
## 🌟 Use Cases
Proving inference of AI models has a wide range of applications, especially in scenarios where privacy and trust are paramount. For instance, in healthcare, sensitive patient data can be used to make predictions without exposing the data itself. In finance, models can be verified for compliance without revealing proprietary algorithms. Additionally, in decentralized applications, zero-knowledge proofs can ensure the integrity of AI computations on the blockchain, fostering trust and transparency. These use cases highlight the transformative potential of ZKML in various industries.
## 🙏 Acknowledgements
This project builds upon the work from scroll-tech/ceno, reusing the sumcheck and GKR implementation from their codebase. Check out their work at [scroll-tech/ceno](https://github.com/scroll-tech/ceno).
For more technical details and usage instructions, dive into the [ZKML README](zkml/README.md).
Happy proving! 🎉
|
https://github.com/tpde2/tpde
|
tpde
A fast framework for writing baseline compiler back-ends in C++
Languages: LLVM (73.5%), C++ (24.7%), C (1.0%), CMake (0.4%), Python (0.4%), Shell (0.0%)
.github/workflows
.github/workflows
LICENSES
LICENSES
deps
deps
docs
docs
tpde-encodegen
tpde-encodegen
...
.clang-format
.clang-format
.gdbinit
.gdbinit
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
> README.md
# TPDE Compiler Back-End Framework
TPDE is a fast compiler back-end framework that adapts to existing SSA IRs.
The primary goal is low-latency compilation while maintaining reasonable (`-O0`) code quality, e.g., as baseline compiler for JIT compilation or unoptimized builds.
Currently, TPDE only targets ELF-based x86-64 and AArch64 (Armv8.1) platforms.
This repository contains:
- TPDE: the core compiler framework.
- TPDE-Encodegen: a utility for easing the use of TPDE by deriving code generators through LLVM's Machine IR.
- TPDE-LLVM: a standalone back-end for LLVM-IR, which compiles 10--20x faster than LLVM -O0 with similar code quality, usable as library (e.g., for JIT), as tool (`tpde-llc`), and integrated in Clang/Flang (with a patch).
For more information and getting started, consult the [documentation](https://docs.tpde.org/).
### Publications
- Tobias Schwarz, Tobias Kamm, and Alexis Engelke. TPDE: A Fast Adaptable Compiler Back-End Framework. [arXiv:2505.22610](https://arxiv.org/abs/2505.22610) [cs.PL]. 2025.
### License
Generally: Apache-2.0 WITH LLVM-exception. (Detailed license information is attached to every file. Dependencies may have different licenses.)
|
https://github.com/XunhaoLai/native-sparse-attention-triton
|
native-sparse-attention-triton
Efficient triton implementation of Native Sparse Attention.
Languages: Python (99.9%), Shell (0.1%)
native_sparse_attention
native_sparse_attention
test
test
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
install_dependency.sh
install_dependency.sh
setup.py
setup.py
> README.md
<div align="center">
# Native Sparse Attention Triton
</div>
This repository implements the sparse attention mechanism introduced in the paper [Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention](https://arxiv.org/abs/2502.11089) and provides an efficient training implementation based on [Triton](https://github.com/triton-lang/triton).
🎉 We now support both training and inference for Native Sparse Attention (variable-length version, including prefilling, decoding, and KV cache management). We have provided a toy model at `model.ToyNSALlama`, which supports a `forward` function for training and a `generate` function for inference. Welcome to try it out!
## Requirements
Ensure the following dependencies are installed:
- PyTorch >= 2.1.0
- triton >= 3.0.0
- einops >= 0.7.0
- flash_attn >= 2.6.3
## Usage
### Notes
1. PyTorch implementations (`ops.torch`) are intended for debugging only.
2. For production use, prefer Triton operators (`ops.triton`).
3. All implementations are based on the varlen approach similar to flash_attn_func_varlen. Please concatenate the inputs of a batch before use.
4. Only attention head dimensions less than 128 are supported for now.
### Install
You can install `native_sparse_attention` using pip:
```shell
pip install git+https://github.com/XunhaoLai/native-sparse-attention-triton.git
```
### Functions
The `ops` module has implemented several functions required for native sparse attention. For detailed usage instructions, please see [this link](https://github.com/XunhaoLai/native-sparse-attention-triton/tree/main/native_sparse_attention/ops#readme).
You can import those functions from the `ops` module:
```python
import torch
from native_sparse_attention.ops import linear_compress, compressed_attention, topk_sparse_attention
# input example
num_q_heads = 64
num_kv_heads = 4
head_dim = 128
kernel_size = 32
kernel_stride = 16
block_size = 64
topk = 16
cu_seqlens = torch.Tensor([0, 1024, 8192, 16384]).to(torch.int32).cuda()
query = torch.randn(16384, num_q_heads, head_dim).to(torch.bfloat16).cuda()
key = torch.randn(16384, num_kv_heads, head_dim).to(torch.bfloat16).cuda()
value = torch.randn(16384, num_kv_heads, head_dim).to(torch.bfloat16).cuda()
# weight example
w = (
torch.randn(num_kv_heads, kernel_size * head_dim, head_dim)
.to(torch.bfloat16)
.cuda()
)
pe = torch.randn(num_kv_heads, kernel_size, head_dim).to(torch.bfloat16).cuda()
# 1. key value compression
compressed_key, compressed_cu_seqlens = linear_compress(
key, w, cu_seqlens, kernel_size, kernel_stride, pe
)
compressed_value, _ = linear_compress(
value, w, cu_seqlens, kernel_size, kernel_stride, None
)
# 2. attention between query and compressed key value
compressed_attn_output, topk_idx = compressed_attention(
query,
compressed_key,
compressed_value,
kernel_size,
kernel_stride,
block_size,
topk,
cu_seqlens,
compressed_cu_seqlens,
init_blocks=1,
local_blocks=2,
)
# 3. topk sparse attention
sparse_attn_output = topk_sparse_attention(
query,
key,
value,
topk_idx,
block_size,
cu_seqlens,
)
```
### Module
The `modules` directory also provides implementations based on `torch.nn.module` for easy integration into models.
```python
from native_sparse_attention.modules import NativeSparseAttention, RopeConfig
NSA_Layer = NativeSparseAttention(
compress_type="linear",
hidden_size=4096,
num_q_heads=64,
num_kv_heads=4,
head_dim=128,
kernel_size=32,
kernel_stride=16,
block_size=64,
topk=8,
init_blocks=1,
local_blocks=2,
window_size=512,
rope_config=RopeConfig(
max_position_embeddings=32768,
head_dim=128,
rope_theta=500000,
rope_scaling={
"factor": 4.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3",
},
),
)
```
### Model
We offer two simplified LLaMA models in the `model` directory, featuring self-attention and native sparse attention. For more details on how to use these models, please refer to [this link](https://github.com/XunhaoLai/native-sparse-attention-triton/tree/main/native_sparse_attention/model#readme).
```python
from native_sparse_attention.model import ToyNSALlamaConfig, InferenceConfig, ToyNSALlama
config = ToyNSALlamaConfig(
hidden_size=4096,
intermediate_size=14336,
num_hidden_layers=8,
num_attention_heads=32,
num_key_value_heads=2,
head_dim=128,
rope_theta=500000.0,
rope_scaling={
"factor": 8.0,
"high_freq_factor": 4.0,
"low_freq_factor": 1.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3",
},
compress_type="weightedpool",
kernel_size=32,
kernel_stride=16,
block_size=64,
topk=8,
init_blocks=1,
local_blocks=2,
window_size=512,
)
inference_config = InferenceConfig(
max_batch_size=4,
max_length=8192,
max_new_tokens=128,
)
model = ToyNSALlama(config, inference_config).cuda().bfloat16()
```
## Testing
Some test scripts are available in the `test` folder and can be run directly for unit testing. For example:
```bash
python test/test_topk_sparse_attention.py
python test/test_nsa_module.py
python test/test_nsa_model.py
```
### Benchmarks
Here are the speed benchmarks conducted on a single NVIDIA A100 GPU or H100 GPU for the `topk_sparse_attention` function:
A100 GPU speed benchmarks:
```sh
** forward with block size 64 **:
N Flash Triton-Flash Triton-Top8 Triton-Top16
0 2048.0 0.414144 0.635648 0.633440 1.009184
1 4096.0 1.400304 2.267552 1.179808 1.916736
2 8192.0 5.223776 8.528160 2.266816 3.723168
3 16384.0 20.225697 32.745537 4.468128 7.359168
4 32768.0 79.587715 128.951065 8.517440 14.142848
5 65536.0 321.240479 511.652100 17.249599 30.991360
6 131072.0 1349.810425 2063.245605 36.400482 67.884544
** backward with block size 64 **:
N Flash Triton-Flash Triton-Top8 Triton-Top16
0 2048.0 1.315440 2.348560 1.941568 2.691040
1 4096.0 4.271584 8.553184 3.647744 5.032160
2 8192.0 15.323984 32.665440 5.650144 9.066112
3 16384.0 58.753281 127.675964 11.160832 17.113279
4 32768.0 227.770462 504.572693 21.723392 34.715614
5 65536.0 899.181274 2059.718506 44.517181 76.309441
6 131072.0 3587.918701 8530.726562 105.344734 182.970169
```
H100 GPU benchmarks:
```sh
** forward with block size 64 **:
N Flash Triton-Flash Triton-Top8 Triton-Top16
0 2048.0 0.259552 0.293888 0.584544 0.917664
1 4096.0 0.846848 1.029904 1.094976 1.745136
2 8192.0 3.043744 3.843392 2.128256 3.396880
3 16384.0 11.743568 14.791360 4.190528 6.704192
4 32768.0 45.968513 57.532478 7.614496 12.417440
5 65536.0 187.234375 228.093948 14.840048 24.511856
6 131072.0 810.890381 914.693970 29.470400 48.990192
** backward with block size 64 **:
N Flash Triton-Flash Triton-Top8 Triton-Top16
0 2048.0 0.798976 1.096096 1.117312 1.380016
1 4096.0 2.545680 3.826336 1.669760 2.214880
2 8192.0 9.029760 14.411633 2.772096 3.947456
3 16384.0 34.144016 58.945698 5.201344 7.538912
4 32768.0 135.718369 233.369247 9.968864 15.154192
5 65536.0 541.053894 929.337646 21.089870 33.818878
6 131072.0 2139.974854 3785.540527 54.918144 93.750717
```
Below are additional speed benchmarks for the `compressed_attention` function, also on a single NVIDIA A100 GPU or H100 GPU:
A100 GPU speed benchmarks:
```sh
** forward with kernel 32 and stride 16 **:
N Flash Triton-Flash Compressed Compressed-wo-Score
0 2048.0 0.413664 0.635488 0.655024 0.170816
1 4096.0 1.396416 2.247648 1.132304 0.377152
2 8192.0 5.234656 8.526400 2.879200 0.977952
3 16384.0 19.988865 32.755199 9.426448 2.943024
4 32768.0 79.419907 128.955170 30.284096 9.901120
5 65536.0 321.590210 511.615509 112.260544 36.001602
6 131072.0 1346.996338 2069.837891 423.099518 136.820038
** backward with kernel 32 and stride 16 **:
N Flash Triton-Flash Compressed
0 2048.0 1.322560 2.352000 0.486784
1 4096.0 4.270832 8.552608 0.971392
2 8192.0 15.515680 32.671329 2.603744
3 16384.0 59.345055 128.377472 8.499456
4 32768.0 230.626144 506.581238 30.064833
5 65536.0 919.260498 2068.642578 113.466560
6 131072.0 3646.603760 8498.374023 439.623444
```
H100 GPU speed benchmarks:
```sh
** forward with kernel 32 and stride 16 **:
N Flash Triton-Flash Compressed Compressed-wo-Score
0 2048.0 0.259488 0.297152 0.485920 0.103232
1 4096.0 0.847376 1.030400 0.710208 0.217760
2 8192.0 3.044016 3.875840 1.607360 0.516016
3 16384.0 11.823104 14.829360 4.970272 1.440288
4 32768.0 46.204750 57.527809 15.004992 4.584736
5 65536.0 187.324249 227.909958 53.009087 16.134224
6 131072.0 810.707214 910.106873 191.245728 60.154270
** backward with kernel 32 and stride 16 **:
N Flash Triton-Flash Compressed
0 2048.0 0.797728 1.090640 0.283104
1 4096.0 2.547088 3.834592 0.550464
2 8192.0 9.021520 14.421088 1.249184
3 16384.0 34.159508 58.793377 3.743440
4 32768.0 136.844070 233.447708 12.640032
5 65536.0 537.559814 929.360229 46.054817
6 131072.0 2135.629883 3782.351562 175.587296
```
All the speed benchmarks above were tested with 64 query heads, 4 key/value heads, and a head dimension of 128.
## Contributing
Contributions are welcome! Please open an issue to discuss major changes.
## Contact
For any questions or feedback, please feel free to contact laixunhao@pku.edu.cn.
## Citations
```bibtex
@inproceedings{Yuan2025NativeSA,
title = {Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention},
author = {Jingyang Yuan and Huazuo Gao and Damai Dai and Junyu Luo and Liang Zhao and Zhengyan Zhang and Zhenda Xie and Y. X. Wei and Lean Wang and Zhiping Xiao and Yuqing Wang and Chong Ruan and Ming Zhang and Wenfeng Liang and Wangding Zeng},
year = {2025},
url = {https://api.semanticscholar.org/CorpusID:276408911}
}
```
|
https://github.com/hedge-dev/XenonRecomp
|
XenonRecomp
A tool for recompiling Xbox 360 games to native executables.
Languages: C++ (88.7%), C (10.1%), CMake (1.2%)
XenonAnalyse
XenonAnalyse
XenonRecomp
XenonRecomp
XenonTests
XenonTests
XenonUtils
XenonUtils
thirdparty
thirdparty
...
.editorconfig
.editorconfig
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
CMakeSettings.json
CMakeSettings.json
> README.md
# XenonRecomp
XenonRecomp is a tool that converts Xbox 360 executables into C++ code, which can then be recompiled for any platform. Currently, it only supports x86 platforms due to the use of x86 intrinsics.
This project was heavily inspired by [N64: Recompiled](https://github.com/N64Recomp/N64Recomp), a similar tool for N64 executables.
**DISCLAIMER:** This project does not provide a runtime implementation. It only converts the game code to C++, which is not going to function correctly without a runtime backing it. **Making the game work is your responsibility.**
## Implementation Details
### Instructions
The instructions are directly converted without any effort to make them resemble decompiled code, meaning the output is not very human-readable. The CPU state is passed as an argument to every PPC function, which includes definitions for every PPC register and their current values at the time of execution. The second argument is the base address pointer, as the Xbox 360 CPU uses 32-bit pointers.
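For illustration, a recompiled function conceptually looks like the following minimal sketch (names and the `PPCContext` layout here are simplified stand-ins, not the actual generated code or the real `ppc_context.h` definitions):

```cpp
#include <cstdint>

// Simplified stand-in for the real PPC context defined in ppc_context.h.
struct PPCContext {
    uint64_t r[32];   // general purpose registers
    double   f[32];   // FPU registers
    // ... condition registers, CTR, XER, vector registers, etc.
};

// Every recompiled function receives the guest CPU state and the base memory
// pointer; guest pointers are 32-bit offsets into 'base'.
void sub_XXXXXXXX(PPCContext& ctx, uint8_t* base) {
    ctx.r[3] = ctx.r[3] + ctx.r[4];   // e.g. what an 'add r3, r3, r4' becomes
}
```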
A good number of PPC instructions are implemented, with missing ones primarily being variants of already implemented instructions. Some instructions, like the D3D unpack/pack instructions, do not support all operand types. When a missing case is encountered, a warning is generated, or a debug break is inserted into the converted C++ code.
The instruction implementations operate on little-endian values. However, since the Xbox 360 is a big-endian machine, the memory load instructions swap endianness when reading values, and memory store instructions reverse it to big-endian before writing. All the memory loads and stores are marked volatile to prevent Clang from doing unsafe code reordering.
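As a rough, illustrative sketch of the load/store behaviour described above (not the actual helpers used by the recompiler):

```cpp
#include <cstdint>

// Guest memory is big-endian, the host is little-endian, so loads and stores
// byte-swap. The accesses are volatile to keep Clang from reordering them.
inline uint32_t guest_load_u32(uint8_t* base, uint32_t ea) {
    uint32_t v = *reinterpret_cast<volatile uint32_t*>(base + ea); // volatile load
    return __builtin_bswap32(v);   // big-endian guest -> little-endian host
}

inline void guest_store_u32(uint8_t* base, uint32_t ea, uint32_t v) {
    // little-endian host -> big-endian guest, then volatile store
    *reinterpret_cast<volatile uint32_t*>(base + ea) = __builtin_bswap32(v);
}
```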
Vector registers' endianness handling is more complicated. Instead of swapping individual 32-bit elements, the recompiler chooses to reverse the entire 16-byte vector. Instructions must account for this reversed order, such as using the WZY components instead of XYZ in dot products or requiring reversed arguments for vector pack instructions.
The FPU expects denormalized numbers to remain unmodified, while VMX instructions always flush them. This is managed by storing the current floating-point state in the CPU state struct and enabling or disabling denormal flushing as necessary before executing each instruction.
Most VMX instructions are implemented using x86 intrinsics. Luckily, the number of AVX intrinsics used is relatively low, so adding support for other architectures using libraries like [SIMD Everywhere](https://github.com/simd-everywhere/simde) might be possible.
### MMIO
MMIO, which is typically used for hardware operations such as XMA decoding, is currently unimplemented. There is an unfinished attempt to implement MMIO, but supporting it may be non-trivial and could require advanced analysis of instructions.
### Indirect Functions
Virtual function calls are resolved by creating a "perfect hash table" at runtime, where dereferencing a 64-bit pointer (using the original instruction address multiplied by 2) gives the address of the recompiled function. This was previously implemented by creating an 8 GB virtual allocation, but it had too much memory pressure. Now it relies on function addresses being placed after the valid XEX memory region in the base memory pointer. These regions are exported as macros in the output `ppc_config.h` file.
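A hedged sketch of what that lookup boils down to is shown below; `PPC_FUNC_TABLE_OFFSET` is a hypothetical placeholder for the macros exported in `ppc_config.h`:

```cpp
#include <cstdint>

struct PPCContext;                                    // defined in ppc_context.h
using PPCFunc = void (*)(PPCContext&, uint8_t*);

// Hypothetical placeholder; the real offsets come from ppc_config.h.
constexpr uint64_t PPC_FUNC_TABLE_OFFSET = 0;

inline PPCFunc resolve_indirect(uint8_t* base, uint32_t guest_addr) {
    // Each entry is a 64-bit host function pointer stored at
    // (guest instruction address * 2) past the table base.
    uint64_t entry = *reinterpret_cast<uint64_t*>(
        base + PPC_FUNC_TABLE_OFFSET + uint64_t(guest_addr) * 2);
    return reinterpret_cast<PPCFunc>(entry);
}
```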
### Jump Tables
Jump tables, at least in older Xbox 360 binaries, often have predictable assembly patterns, making them easy to detect statically without needing a virtual machine. XenonAnalyse has logic for detecting jump tables in Sonic Unleashed, though variations in other games (likely due to updates in the Xbox 360 compiler) may require modifications to the detection logic. Currently, there is no fully generic solution for handling jump tables, so updates to the detection logic may be needed for other games.
The typical way to find jump tables is by searching for the `mtctr r0` instruction. It will almost always be followed by a `bctr`, with the preceding instructions computing the jump address.
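As an illustration of that pattern search (a simplified sketch, not XenonAnalyse's actual detection logic), one can scan the big-endian instruction stream for the `mtctr r0` encoding followed immediately by `bctr`:

```cpp
#include <cstdint>
#include <vector>

// Report code offsets where 'mtctr r0' is directly followed by 'bctr'.
// Real detection also analyses the preceding instructions that compute
// the jump address and the table bounds.
std::vector<size_t> find_jump_table_candidates(const uint8_t* code, size_t size) {
    std::vector<size_t> hits;
    for (size_t i = 0; i + 8 <= size; i += 4) {
        // Instructions are stored big-endian in the XEX image.
        uint32_t insn = __builtin_bswap32(*reinterpret_cast<const uint32_t*>(code + i));
        uint32_t next = __builtin_bswap32(*reinterpret_cast<const uint32_t*>(code + i + 4));
        if (insn == 0x7C0903A6 /* mtctr r0 */ && next == 0x4E800420 /* bctr */) {
            hits.push_back(i);
        }
    }
    return hits;
}
```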
XenonAnalyse generates a TOML file containing detected jump tables, which can be referenced in the main TOML config file. This allows the recompiler to generate real switch cases for these jump tables.
### Function Boundary Analysis
XenonAnalyse includes a function boundary analyzer that works well in most cases. Functions with stack space have their boundaries defined in the `.pdata` segment of the XEX. For functions not found in this segment, the analyzer detects the start of functions by searching for branch link instructions, and determines their length via static analysis.
However, the analyzer struggles with functions containing jump tables, since they look like tail calls without enough information. While there is currently no solution for this, it might be relatively simple to extend the function analyzer to account for jump tables defined in the TOML file. As a workaround, the recompiler TOML file allows users to manually define function boundaries.
### Exceptions
The recompiler currently does not support exceptions. This is challenging due to the use of the link register and the fact that exception handlers can jump to arbitrary code locations.
### setjmp
`setjmp` and `longjmp` are implemented by redirecting them to native implementations. Thanks to the Xbox 360's large number of vector registers, the guest CPU state struct is large enough to hold the x86 CPU state and potentially states from other architectures.
### Optimizations
Since Xbox 360 binaries typically follow a stable ABI, we can make certain assumptions about code structure, allowing the Clang compiler to generate better code. Several optimization options are available in the recompiler, but it's recommended to test them only after having a successfully functioning recompilation.
The link register can be skipped assuming the game does not utilize exceptions, as the whole process of recompilation already takes care of function return behavior.
The following registers, assuming the game doesn't violate the ABI, can be safely converted into local variables, as they never leave the function scope:
* Count register
* XER
* Reserved register
* Condition registers
* Non argument registers
* Non volatile registers
The local variable optimization particularly introduces the most improvements, as the calls to the register restore/save functions can be completely removed, and the redundant stores to the PPC context struct can be eliminated. In [Unleashed Recompiled](https://github.com/hedge-dev/UnleashedRecomp), the executable size decreases by around 20 MB with these optimizations, and frame times are reduced by several milliseconds.
### Patch Mechanisms
XenonRecomp defines PPC functions in a way that makes them easy to hook, using techniques in the Clang compiler. By aliasing a PPC function to an "implementation function" and marking the original function as weakly linked, users can override it with a custom implementation while retaining access to the original function:
```cpp
PPC_FUNC_IMPL(__imp__sub_XXXXXXXX);
PPC_FUNC(sub_XXXXXXXX)
{
__imp__sub_XXXXXXXX(ctx, base);
}
```
Additionally, mid-asm hooks can be inserted directly into the translated C++ code at specific instruction addresses. The recompiler inserts these function calls, and users are responsible for implementing them in their recompilation project. The linker resolves them during compilation.
## Usage
### XenonAnalyse
XenonAnalyse, when used as a command-line application, allows an XEX file to be passed as an input argument to output a TOML file containing all the detected jump tables in the executable:
```
XenonAnalyse [input XEX file path] [output jump table TOML file path]
```
However, as explained in the earlier sections, due to variations between games, additional support may be needed to handle different patterns.
[An example jump table TOML file can be viewed in the Unleashed Recompiled repository.](https://github.com/hedge-dev/UnleashedRecomp/blob/main/UnleashedRecompLib/config/SWA_switch_tables.toml)
### XenonRecomp
XenonRecomp accepts a TOML file with recompiler configurations and the path to the `ppc_context.h` file located in the XenonUtils directory:
```
XenonRecomp [input TOML file path] [input PPC context header file path]
```
[An example recompiler TOML file can be viewed in the Unleashed Recompiled repository.](https://github.com/hedge-dev/UnleashedRecomp/blob/main/UnleashedRecompLib/config/SWA.toml)
#### Main
```toml
[main]
file_path = "../private/default.xex"
patch_file_path = "../private/default.xexp"
patched_file_path = "../private/default_patched.xex"
out_directory_path = "../ppc"
switch_table_file_path = "SWA_switch_tables.toml"
```
All the paths are relative to the directory where the TOML file is stored.
Property|Description
-|-
file_path|Path to the XEX file.
patch_file_path|Path to the XEXP file. This is not required if the game has no title updates.
patched_file_path|Path to the patched XEX file. XenonRecomp will create this file automatically if it is missing and reuse it in subsequent recompilations. It does nothing if no XEXP file is specified. You can pass this output file to XenonAnalyse.
out_directory_path|Path to the directory that will contain the output C++ code. This directory must exist before running the recompiler.
switch_table_file_path|Path to the TOML file containing the jump table definitions. The recompiler uses this file to convert jump tables to real switch cases.
#### Optimizations
```toml
skip_lr = false
skip_msr = false
ctr_as_local = false
xer_as_local = false
reserved_as_local = false
cr_as_local = false
non_argument_as_local = false
non_volatile_as_local = false
```
Enables or disables various optimizations explained earlier in the documentation. It is recommended not to enable these optimizations until you have a successfully running recompilation.
#### Register Restore & Save Functions
```toml
restgprlr_14_address = 0x831B0B40
savegprlr_14_address = 0x831B0AF0
restfpr_14_address = 0x831B144C
savefpr_14_address = 0x831B1400
restvmx_14_address = 0x831B36E8
savevmx_14_address = 0x831B3450
restvmx_64_address = 0x831B377C
savevmx_64_address = 0x831B34E4
```
Xbox 360 binaries feature specialized register restore & save functions that act similarly to switch case fallthroughs. Every function that utilizes non-volatile registers either has an inlined version of these functions or explicitly calls them. The recompiler requires the starting address of each restore/save function in the TOML file to recompile them correctly. These functions could likely be auto-detected, but there is currently no mechanism for it.
Property|Description|Byte Pattern
-|-|-
restgprlr_14_address|Start address of the `__restgprlr_14` function. It starts with `ld r14, -0x98(r1)`, repeating the same operation for the rest of the non-volatile registers and restoring the link register at the end.|`e9 c1 ff 68`
savegprlr_14_address|Start address of the `__savegprlr_14` function. It starts with `std r14, -0x98(r1)`, repeating the same operation for the rest of the non-volatile registers and saving the link register at the end.|`f9 c1 ff 68`
restfpr_14_address|Start address of the `__restfpr_14` function. It starts with `lfd f14, -0x90(r12)`, repeating the same operation for the rest of the non-volatile FPU registers.|`c9 cc ff 70`
savefpr_14_address|Start address of the `__savefpr_14` function. It starts with `stfd f14, -0x90(r12)`, repeating the same operation for the rest of the non-volatile FPU registers.|`d9 cc ff 70`
restvmx_14_address|Start address of the `__restvmx_14` function. It starts with `li r11, -0x120` and `lvx v14, r11, r12`, repeating the same operation for the rest of the non-volatile VMX registers until `v31`.|`39 60 fe e0 7d cb 60 ce`
savevmx_14_address|Start address of the `__savevmx_14` function. It starts with `li r11, -0x120` and `stvx v14, r11, r12`, repeating the same operation for the rest of the non-volatile VMX registers until `v31`.|`39 60 fe e0 7d cb 61 ce`
restvmx_64_address|Start address of the `__restvmx_64` function. It starts with `li r11, -0x400` and `lvx128 v64, r11, r12`, repeating the same operation for the rest of the non-volatile VMX registers.|`39 60 fc 00 10 0b 60 cb`
savevmx_64_address|Start address of the `__savevmx_64` function. It starts with `li r11, -0x400` and `stvx128 v64, r11, r12`, repeating the same operation for the rest of the non-volatile VMX registers.|`39 60 fc 00 10 0b 61 cb`
#### longjmp & setjmp
```toml
longjmp_address = 0x831B6790
setjmp_address = 0x831B6AB0
```
These are addresses for the `longjmp` and `setjmp` functions in the executable. The recompiler directly redirects these functions to native versions. The implementation of these functions might vary between games. In some cases, you might find `longjmp` by looking for calls to `RtlUnwind`, and `setjmp` typically appears just after it.
If the game does not use these functions, you can remove the properties from the TOML file.
#### Explicit Function Boundaries
```toml
functions = [
{ address = 0x824E7EF0, size = 0x98 },
{ address = 0x824E7F28, size = 0x60 },
]
```
You can define function boundaries explicitly using the `functions` property if XenonAnalyse fails to analyze them correctly, for example, with functions containing jump tables.
#### Invalid Instruction Skips
```toml
invalid_instructions = [
{ data = 0x00000000, size = 4 }, # Padding
{ data = 0x831B1C90, size = 8 }, # C++ Frame Handler
{ data = 0x8324B3BC, size = 8 }, # C Specific Frame Handler
{ data = 0x831C8B50, size = 8 },
{ data = 0x00485645, size = 44 } # End of .text
]
```
In the `invalid_instructions` property, you can define 32-bit integer values that instruct the recompiler to skip over certain bytes when it encounters them. For example, in Unleashed Recompiled, these are used to skip over exception handling data, which is placed between functions but is not valid code.
#### Mid-asm Hooks
```toml
[[midasm_hook]]
name = "IndexBufferLengthMidAsmHook"
address = 0x82E26244
registers = ["r3"]
```
```cpp
void IndexBufferLengthMidAsmHook(PPCRegister& r3)
{
// ...
}
```
You can define multiple mid-asm hooks in the TOML file, allowing the recompiler to insert function calls at specified addresses. When implementing them in your recompilation project, the linker will resolve the calls automatically.
Property|Description
-|-
name|Function name of the mid-asm hook. You can reuse function names to place the same implementation at multiple addresses. Otherwise, unique implementations must have unique names.
address|Address of the instruction where the function call will be placed. This does not overwrite the instruction at the specified address.
registers|Registers to pass as arguments to the mid-asm hook. This is a list of registers because the local variable optimization does not keep optimized registers within the PPC context struct.
return|Set to `true` to indicate that the function where the hook was inserted should immediately return after calling the mid-asm hook.
return_on_true|Set to `true` to indicate that the function should return if the mid-asm hook call returns `true`.
return_on_false|Set to `true` to indicate that the function should return if the mid-asm hook call returns `false`.
jump_address|The address to jump to immediately after calling the mid-asm hook. The address must be within the same function where the hook was placed.
jump_address_on_true|The address to jump to if the mid-asm hook returns `true`. The address must be within the same function where the hook was placed.
jump_address_on_false|The address to jump to if the mid-asm hook returns `false`. The address must be within the same function where the hook was placed.
after_instruction|Set to `true` to place the mid-asm hook immediately after the instruction, instead of before.
Certain properties are mutually exclusive. For example, you cannot use both `return` and `jump_address`, and direct or conditional returns/jumps cannot be mixed. The recompiler will show warnings if this is not followed.
### Tests
XenonRecomp can recompile Xenia's PPC tests and execute them through the XenonTests project in the repository. After building the tests using Xenia's build system, XenonRecomp can process the `src/xenia/cpu/ppc/testing/bin` directory as input, generating C++ files in the specified output directory:
```
XenonRecomp [input testing directory path] [input PPC context header file path] [output directory path]
```
Once the files are generated, refresh XenonTests' CMake cache to make them appear in the project. The tests can then be executed to compare the results of instructions against the expected values.
## Building
The project requires CMake 3.20 or later and Clang 18 or later to build. Since the repository includes submodules, ensure you clone it recursively.
Compilers other than Clang have not been tested and are not recommended, including for recompilation output. The project relies on compiler-specific intrinsics and techniques that may not function correctly on other compilers, and many optimization methods depend on Clang's code generation.
On Windows, you can use the clang-cl toolset and open the project in Visual Studio's CMake integration.
## Special Thanks
This project could not have been possible without the [Xenia](https://github.com/xenia-project/xenia) emulator, as many parts of the CPU code conversion process have been implemented by heavily referencing its PPC code translator. The project also uses code from [Xenia Canary](https://github.com/xenia-canary/xenia-canary) to patch XEX binaries.
|
https://github.com/ggml-org/whisper.cpp
|
whisper.cpp
Port of OpenAI's Whisper model in C/C++
Languages: C++ (48.7%), C (24.4%), Cuda (11.2%), Objective-C (4.5%), Metal (3.7%), Shell (2.2%)
.devops
.devops
.github/workflows
.github/workflows
bindings
bindings
ci
ci
cmake
cmake
...
.dockerignore
.dockerignore
.gitignore
.gitignore
AUTHORS
AUTHORS
CMakeLists.txt
CMakeLists.txt
LICENSE
LICENSE
> README.md
# whisper.cpp

[](https://github.com/ggml-org/whisper.cpp/actions)
[](https://opensource.org/licenses/MIT)
[](https://conan.io/center/whisper-cpp)
[](https://www.npmjs.com/package/whisper.cpp/)
Stable: [v1.7.6](https://github.com/ggml-org/whisper.cpp/releases/tag/v1.7.6) / [Roadmap](https://github.com/orgs/ggml-org/projects/4/)
High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisper) automatic speech recognition (ASR) model:
- Plain C/C++ implementation without dependencies
- Apple Silicon first-class citizen - optimized via ARM NEON, Accelerate framework, Metal and [Core ML](#core-ml-support)
- AVX intrinsics support for x86 architectures
- [VSX intrinsics support for POWER architectures](#power-vsx-intrinsics)
- Mixed F16 / F32 precision
- [Integer quantization support](#quantization)
- Zero memory allocations at runtime
- [Vulkan support](#vulkan-gpu-support)
- Support for CPU-only inference
- [Efficient GPU support for NVIDIA](#nvidia-gpu-support)
- [OpenVINO Support](#openvino-support)
- [Ascend NPU Support](#ascend-npu-support)
- [Moore Threads GPU Support](#moore-threads-gpu-support)
- [C-style API](https://github.com/ggml-org/whisper.cpp/blob/master/include/whisper.h)
- [Voice Activity Detection (VAD)](#voice-activity-detection-vad)
Supported platforms:
- [x] Mac OS (Intel and Arm)
- [x] [iOS](examples/whisper.objc)
- [x] [Android](examples/whisper.android)
- [x] [Java](bindings/java/README.md)
- [x] Linux / [FreeBSD](https://github.com/ggml-org/whisper.cpp/issues/56#issuecomment-1350920264)
- [x] [WebAssembly](examples/whisper.wasm)
- [x] Windows ([MSVC](https://github.com/ggml-org/whisper.cpp/blob/master/.github/workflows/build.yml#L117-L144) and [MinGW](https://github.com/ggml-org/whisper.cpp/issues/168))
- [x] [Raspberry Pi](https://github.com/ggml-org/whisper.cpp/discussions/166)
- [x] [Docker](https://github.com/ggml-org/whisper.cpp/pkgs/container/whisper.cpp)
The entire high-level implementation of the model is contained in [whisper.h](include/whisper.h) and [whisper.cpp](src/whisper.cpp).
The rest of the code is part of the [`ggml`](https://github.com/ggml-org/ggml) machine learning library.
Having such a lightweight implementation of the model makes it easy to integrate it into different platforms and applications.
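As a rough illustration of how the C-style API can be driven from C++ (a minimal sketch only; `pcmf32` is assumed to already contain 16 kHz mono float PCM samples, and the exact signatures are in [whisper.h](include/whisper.h)):

```cpp
#include "whisper.h"
#include <cstdio>
#include <vector>

int main() {
    // Assumed to be filled with 16 kHz mono float PCM samples elsewhere.
    std::vector<float> pcmf32;

    whisper_context_params cparams = whisper_context_default_params();
    whisper_context * ctx = whisper_init_from_file_with_params("models/ggml-base.en.bin", cparams);
    if (!ctx) return 1;

    whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    if (whisper_full(ctx, wparams, pcmf32.data(), (int) pcmf32.size()) != 0) {
        whisper_free(ctx);
        return 1;
    }

    // Print the transcribed text segment by segment.
    for (int i = 0; i < whisper_full_n_segments(ctx); ++i) {
        printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }

    whisper_free(ctx);
    return 0;
}
```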
As an example, here is a video of running the model on an iPhone 13 device - fully offline, on-device: [whisper.objc](examples/whisper.objc)
https://user-images.githubusercontent.com/1991296/197385372-962a6dea-bca1-4d50-bf96-1d8c27b98c81.mp4
You can also easily make your own offline voice assistant application: [command](examples/command)
https://user-images.githubusercontent.com/1991296/204038393-2f846eae-c255-4099-a76d-5735c25c49da.mp4
On Apple Silicon, the inference runs fully on the GPU via Metal:
https://github.com/ggml-org/whisper.cpp/assets/1991296/c82e8f86-60dc-49f2-b048-d2fdbd6b5225
## Quick start
First clone the repository:
```bash
git clone https://github.com/ggml-org/whisper.cpp.git
```
Navigate into the directory:
```
cd whisper.cpp
```
Then, download one of the Whisper [models](models/README.md) converted in [`ggml` format](#ggml-format). For example:
```bash
sh ./models/download-ggml-model.sh base.en
```
Now build the [whisper-cli](examples/cli) example and transcribe an audio file like this:
```bash
# build the project
cmake -B build
cmake --build build -j --config Release
# transcribe an audio file
./build/bin/whisper-cli -f samples/jfk.wav
```
---
For a quick demo, simply run `make base.en`.
The command downloads the `base.en` model converted to custom `ggml` format and runs the inference on all `.wav` samples in the folder `samples`.
For detailed usage instructions, run: `./build/bin/whisper-cli -h`
Note that the [whisper-cli](examples/cli) example currently runs only with 16-bit WAV files, so make sure to convert your input before running the tool.
For example, you can use `ffmpeg` like this:
```bash
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav
```
## More audio samples
If you want some extra audio samples to play with, simply run:
```
make -j samples
```
This will download a few more audio files from Wikipedia and convert them to 16-bit WAV format via `ffmpeg`.
You can download and run the other models as follows:
```
make -j tiny.en
make -j tiny
make -j base.en
make -j base
make -j small.en
make -j small
make -j medium.en
make -j medium
make -j large-v1
make -j large-v2
make -j large-v3
make -j large-v3-turbo
```
## Memory usage
| Model | Disk | Mem |
| ------ | ------- | ------- |
| tiny | 75 MiB | ~273 MB |
| base | 142 MiB | ~388 MB |
| small | 466 MiB | ~852 MB |
| medium | 1.5 GiB | ~2.1 GB |
| large | 2.9 GiB | ~3.9 GB |
## POWER VSX Intrinsics
`whisper.cpp` supports POWER architectures and includes code which
significantly speeds operation on Linux running on POWER9/10, making it
capable of faster-than-realtime transcription on underclocked Raptor
Talos II. Ensure you have a BLAS package installed, and replace the
standard cmake setup with:
```bash
# build with GGML_BLAS defined
cmake -B build -DGGML_BLAS=1
cmake --build build -j --config Release
./build/bin/whisper-cli [ .. etc .. ]
```
## Quantization
`whisper.cpp` supports integer quantization of the Whisper `ggml` models.
Quantized models require less memory and disk space and depending on the hardware can be processed more efficiently.
Here are the steps for creating and using a quantized model:
```bash
# quantize a model with Q5_0 method
cmake -B build
cmake --build build -j --config Release
./build/bin/quantize models/ggml-base.en.bin models/ggml-base.en-q5_0.bin q5_0
# run the examples as usual, specifying the quantized model file
./build/bin/whisper-cli -m models/ggml-base.en-q5_0.bin ./samples/gb0.wav
```
## Core ML support
On Apple Silicon devices, the Encoder inference can be executed on the Apple Neural Engine (ANE) via Core ML. This can result in significant
speed-up - more than x3 faster compared with CPU-only execution. Here are the instructions for generating a Core ML model and using it with `whisper.cpp`:
- Install Python dependencies needed for the creation of the Core ML model:
```bash
pip install ane_transformers
pip install openai-whisper
pip install coremltools
```
- To ensure `coremltools` operates correctly, please confirm that [Xcode](https://developer.apple.com/xcode/) is installed and execute `xcode-select --install` to install the command-line tools.
- Python 3.11 is recommended.
- MacOS Sonoma (version 14) or newer is recommended, as older versions of MacOS might experience issues with transcription hallucination.
- [OPTIONAL] It is recommended to utilize a Python version management system, such as [Miniconda](https://docs.conda.io/en/latest/miniconda.html) for this step:
- To create an environment, use: `conda create -n py311-whisper python=3.11 -y`
- To activate the environment, use: `conda activate py311-whisper`
- Generate a Core ML model. For example, to generate a `base.en` model, use:
```bash
./models/generate-coreml-model.sh base.en
```
This will generate the folder `models/ggml-base.en-encoder.mlmodelc`
- Build `whisper.cpp` with Core ML support:
```bash
# using CMake
cmake -B build -DWHISPER_COREML=1
cmake --build build -j --config Release
```
- Run the examples as usual. For example:
```text
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
...
whisper_init_state: loading Core ML model from 'models/ggml-base.en-encoder.mlmodelc'
whisper_init_state: first run on a device may take a while ...
whisper_init_state: Core ML model loaded
system_info: n_threads = 4 / 10 | AVX = 0 | AVX2 = 0 | AVX512 = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 | COREML = 1 |
...
```
The first run on a device is slow, since the ANE service compiles the Core ML model to some device-specific format.
Next runs are faster.
For more information about the Core ML implementation please refer to PR [#566](https://github.com/ggml-org/whisper.cpp/pull/566).
## OpenVINO support
On platforms that support [OpenVINO](https://github.com/openvinotoolkit/openvino), the Encoder inference can be executed
on OpenVINO-supported devices including x86 CPUs and Intel GPUs (integrated & discrete).
This can result in significant speedup in encoder performance. Here are the instructions for generating the OpenVINO model and using it with `whisper.cpp`:
- First, set up a Python virtual environment and install the Python dependencies. Python 3.10 is recommended.
Windows:
```powershell
cd models
python -m venv openvino_conv_env
openvino_conv_env\Scripts\activate
python -m pip install --upgrade pip
pip install -r requirements-openvino.txt
```
Linux and macOS:
```bash
cd models
python3 -m venv openvino_conv_env
source openvino_conv_env/bin/activate
python -m pip install --upgrade pip
pip install -r requirements-openvino.txt
```
- Generate an OpenVINO encoder model. For example, to generate a `base.en` model, use:
```
python convert-whisper-to-openvino.py --model base.en
```
This will produce ggml-base.en-encoder-openvino.xml/.bin IR model files. It's recommended to relocate these to the same folder as `ggml` models, as that
is the default location that the OpenVINO extension will search at runtime.
- Build `whisper.cpp` with OpenVINO support:
Download the OpenVINO package from the [release page](https://github.com/openvinotoolkit/openvino/releases). The recommended version to use is [2024.6.0](https://github.com/openvinotoolkit/openvino/releases/tag/2024.6.0). Ready-to-use binaries of the required libraries can be found in the [OpenVINO Archives](https://storage.openvinotoolkit.org/repositories/openvino/packages/2024.6/)
After downloading and extracting the package onto your development system, set up the required environment by sourcing the setupvars script. For example:
Linux:
```bash
source /path/to/l_openvino_toolkit_ubuntu22_2023.0.0.10926.b4452d56304_x86_64/setupvars.sh
```
Windows (cmd):
```powershell
C:\Path\To\w_openvino_toolkit_windows_2023.0.0.10926.b4452d56304_x86_64\setupvars.bat
```
And then build the project using cmake:
```bash
cmake -B build -DWHISPER_OPENVINO=1
cmake --build build -j --config Release
```
- Run the examples as usual. For example:
```text
$ ./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
...
whisper_ctx_init_openvino_encoder: loading OpenVINO model from 'models/ggml-base.en-encoder-openvino.xml'
whisper_ctx_init_openvino_encoder: first run on a device may take a while ...
whisper_openvino_init: path_model = models/ggml-base.en-encoder-openvino.xml, device = GPU, cache_dir = models/ggml-base.en-encoder-openvino-cache
whisper_ctx_init_openvino_encoder: OpenVINO model loaded
system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | COREML = 0 | OPENVINO = 1 |
...
```
The first run on an OpenVINO device is slow, since the OpenVINO framework will compile the IR (Intermediate Representation) model to a device-specific 'blob'. This blob is cached for subsequent runs.
For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggml-org/whisper.cpp/pull/1037).
## NVIDIA GPU support
With NVIDIA cards the processing of the models is done efficiently on the GPU via cuBLAS and custom CUDA kernels.
First, make sure you have installed `cuda`: https://developer.nvidia.com/cuda-downloads
Now build `whisper.cpp` with CUDA support:
```
cmake -B build -DGGML_CUDA=1
cmake --build build -j --config Release
```
or for newer NVIDIA GPUs (RTX 5000 series):
```
cmake -B build -DGGML_CUDA=1 -DCMAKE_CUDA_ARCHITECTURES="86"
cmake --build build -j --config Release
```
## Vulkan GPU support
Vulkan is a cross-vendor solution that allows you to accelerate workloads on your GPU.
First, make sure your graphics card driver provides support for Vulkan API.
Now build `whisper.cpp` with Vulkan support:
```
cmake -B build -DGGML_VULKAN=1
cmake --build build -j --config Release
```
## BLAS CPU support via OpenBLAS
Encoder processing can be accelerated on the CPU via OpenBLAS.
First, make sure you have installed `openblas`: https://www.openblas.net/
Now build `whisper.cpp` with OpenBLAS support:
```
cmake -B build -DGGML_BLAS=1
cmake --build build -j --config Release
```
## Ascend NPU support
Ascend NPU provides inference acceleration via [`CANN`](https://www.hiascend.com/en/software/cann) and AI cores.
First, check if your Ascend NPU device is supported:
**Verified devices**
| Ascend NPU | Status |
|:-----------------------------:|:-------:|
| Atlas 300T A2 | Support |
Then, make sure you have installed the [`CANN toolkit`](https://www.hiascend.com/en/software/cann/community). The latest version of CANN is recommended.
Now build `whisper.cpp` with CANN support:
```
cmake -B build -DGGML_CANN=1
cmake --build build -j --config Release
```
Run the inference examples as usual, for example:
```
./build/bin/whisper-cli -f samples/jfk.wav -m models/ggml-base.en.bin -t 8
```
*Notes:*
- If you have trouble with your Ascend NPU device, please create an issue with the **[CANN]** prefix/tag.
- If you run successfully with your Ascend NPU device, please help update the table `Verified devices`.
## Moore Threads GPU support
With Moore Threads cards the processing of the models is done efficiently on the GPU via muBLAS and custom MUSA kernels.
First, make sure you have installed `MUSA SDK rc4.2.0`: https://developer.mthreads.com/sdk/download/musa?equipment=&os=&driverVersion=&version=4.2.0
Now build `whisper.cpp` with MUSA support:
```
cmake -B build -DGGML_MUSA=1
cmake --build build -j --config Release
```
or specify the architecture for your Moore Threads GPU. For example, if you have a MTT S80 GPU, you can specify the architecture as follows:
```
cmake -B build -DGGML_MUSA=1 -DMUSA_ARCHITECTURES="21"
cmake --build build -j --config Release
```
## FFmpeg support (Linux only)
If you want to support more audio formats (such as Opus and AAC), you can turn on the `WHISPER_FFMPEG` build flag to enable FFmpeg integration.
First, you need to install required libraries:
```bash
# Debian/Ubuntu
sudo apt install libavcodec-dev libavformat-dev libavutil-dev
# RHEL/Fedora
sudo dnf install libavcodec-free-devel libavformat-free-devel libavutil-free-devel
```
Then you can build the project as follows:
```bash
cmake -B build -D WHISPER_FFMPEG=yes
cmake --build build
```
Run the following example to confirm it's working:
```bash
# Convert an audio file to Opus format
ffmpeg -i samples/jfk.wav jfk.opus
# Transcribe the audio file
./build/bin/whisper-cli --model models/ggml-base.en.bin --file jfk.opus
```
## Docker
### Prerequisites
- Docker must be installed and running on your system.
- Create a folder to store big models & intermediate files (ex. /whisper/models)
### Images
The following Docker images are available for this project:
1. `ghcr.io/ggml-org/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
2. `ghcr.io/ggml-org/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
3. `ghcr.io/ggml-org/whisper.cpp:main-musa`: Same as `main` but compiled with MUSA support. (platforms: `linux/amd64`)
### Usage
```shell
# download model and persist it in a local folder
docker run -it --rm \
-v path/to/models:/models \
whisper.cpp:main "./models/download-ggml-model.sh base /models"
# transcribe an audio file
docker run -it --rm \
-v path/to/models:/models \
-v path/to/audios:/audios \
whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f /audios/jfk.wav"
# transcribe an audio file in samples folder
docker run -it --rm \
-v path/to/models:/models \
whisper.cpp:main "whisper-cli -m /models/ggml-base.bin -f ./samples/jfk.wav"
```
## Installing with Conan
You can install pre-built binaries for whisper.cpp or build it from source using [Conan](https://conan.io/). Use the following command:
```
conan install --requires="whisper-cpp/[*]" --build=missing
```
For detailed instructions on how to use Conan, please refer to the [Conan documentation](https://docs.conan.io/2/).
## Limitations
- Inference only
## Real-time audio input example
This is a naive example of performing real-time inference on audio from your microphone.
The [stream](examples/stream) tool samples the audio every half a second and runs the transcription continuously.
More info is available in [issue #10](https://github.com/ggml-org/whisper.cpp/issues/10).
You will need to have [sdl2](https://wiki.libsdl.org/SDL2/Installation) installed for it to work properly.
```bash
cmake -B build -DWHISPER_SDL2=ON
cmake --build build -j --config Release
./build/bin/whisper-stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
```
https://user-images.githubusercontent.com/1991296/194935793-76afede7-cfa8-48d8-a80f-28ba83be7d09.mp4
## Confidence color-coding
Adding the `--print-colors` argument will print the transcribed text using an experimental color coding strategy
to highlight words with high or low confidence:
```bash
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/gb0.wav --print-colors
```
<img width="965" alt="image" src="https://user-images.githubusercontent.com/1991296/197356445-311c8643-9397-4e5e-b46e-0b4b4daa2530.png">
## Controlling the length of the generated text segments (experimental)
For example, to limit the line length to a maximum of 16 characters, simply add `-ml 16`:
```text
$ ./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 16
whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |
main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:00.850] And so my
[00:00:00.850 --> 00:00:01.590] fellow
[00:00:01.590 --> 00:00:04.140] Americans, ask
[00:00:04.140 --> 00:00:05.660] not what your
[00:00:05.660 --> 00:00:06.840] country can do
[00:00:06.840 --> 00:00:08.430] for you, ask
[00:00:08.430 --> 00:00:09.440] what you can do
[00:00:09.440 --> 00:00:10.020] for your
[00:00:10.020 --> 00:00:11.000] country.
```
## Word-level timestamp (experimental)
The `--max-len` argument can be used to obtain word-level timestamps. Simply use `-ml 1`:
```text
$ ./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -ml 1
whisper_model_load: loading model from './models/ggml-base.en.bin'
...
system_info: n_threads = 4 / 10 | AVX2 = 0 | AVX512 = 0 | NEON = 1 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 |
main: processing './samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, timestamps = 1 ...
[00:00:00.000 --> 00:00:00.320]
[00:00:00.320 --> 00:00:00.370] And
[00:00:00.370 --> 00:00:00.690] so
[00:00:00.690 --> 00:00:00.850] my
[00:00:00.850 --> 00:00:01.590] fellow
[00:00:01.590 --> 00:00:02.850] Americans
[00:00:02.850 --> 00:00:03.300] ,
[00:00:03.300 --> 00:00:04.140] ask
[00:00:04.140 --> 00:00:04.990] not
[00:00:04.990 --> 00:00:05.410] what
[00:00:05.410 --> 00:00:05.660] your
[00:00:05.660 --> 00:00:06.260] country
[00:00:06.260 --> 00:00:06.600] can
[00:00:06.600 --> 00:00:06.840] do
[00:00:06.840 --> 00:00:07.010] for
[00:00:07.010 --> 00:00:08.170] you
[00:00:08.170 --> 00:00:08.190] ,
[00:00:08.190 --> 00:00:08.430] ask
[00:00:08.430 --> 00:00:08.910] what
[00:00:08.910 --> 00:00:09.040] you
[00:00:09.040 --> 00:00:09.320] can
[00:00:09.320 --> 00:00:09.440] do
[00:00:09.440 --> 00:00:09.760] for
[00:00:09.760 --> 00:00:10.020] your
[00:00:10.020 --> 00:00:10.510] country
[00:00:10.510 --> 00:00:11.000] .
```
## Speaker segmentation via tinydiarize (experimental)
More information about this approach is available here: https://github.com/ggml-org/whisper.cpp/pull/1058
Sample usage:
```bash
# download a tinydiarize compatible model
./models/download-ggml-model.sh small.en-tdrz
# run as usual, adding the "-tdrz" command-line argument
./build/bin/whisper-cli -f ./samples/a13.wav -m ./models/ggml-small.en-tdrz.bin -tdrz
...
main: processing './samples/a13.wav' (480000 samples, 30.0 sec), 4 threads, 1 processors, lang = en, task = transcribe, tdrz = 1, timestamps = 1 ...
...
[00:00:00.000 --> 00:00:03.800] Okay Houston, we've had a problem here. [SPEAKER_TURN]
[00:00:03.800 --> 00:00:06.200] This is Houston. Say again please. [SPEAKER_TURN]
[00:00:06.200 --> 00:00:08.260] Uh Houston we've had a problem.
[00:00:08.260 --> 00:00:11.320] We've had a main beam up on a volt. [SPEAKER_TURN]
[00:00:11.320 --> 00:00:13.820] Roger main beam interval. [SPEAKER_TURN]
[00:00:13.820 --> 00:00:15.100] Uh uh [SPEAKER_TURN]
[00:00:15.100 --> 00:00:18.020] So okay stand, by thirteen we're looking at it. [SPEAKER_TURN]
[00:00:18.020 --> 00:00:25.740] Okay uh right now uh Houston the uh voltage is uh is looking good um.
[00:00:27.620 --> 00:00:29.940] And we had a a pretty large bank or so.
```
## Karaoke-style movie generation (experimental)
The [whisper-cli](examples/cli) example provides support for output of karaoke-style movies, where the
currently pronounced word is highlighted. Use the `-owts` argument and run the generated bash script.
This requires `ffmpeg` to be installed.
Here are a few _"typical"_ examples:
```bash
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/jfk.wav -owts
source ./samples/jfk.wav.wts
ffplay ./samples/jfk.wav.mp4
```
https://user-images.githubusercontent.com/1991296/199337465-dbee4b5e-9aeb-48a3-b1c6-323ac4db5b2c.mp4
---
```bash
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/mm0.wav -owts
source ./samples/mm0.wav.wts
ffplay ./samples/mm0.wav.mp4
```
https://user-images.githubusercontent.com/1991296/199337504-cc8fd233-0cb7-4920-95f9-4227de3570aa.mp4
---
```bash
./build/bin/whisper-cli -m ./models/ggml-base.en.bin -f ./samples/gb0.wav -owts
source ./samples/gb0.wav.wts
ffplay ./samples/gb0.wav.mp4
```
https://user-images.githubusercontent.com/1991296/199337538-b7b0c7a3-2753-4a88-a0cd-f28a317987ba.mp4
---
## Video comparison of different models
Use the [scripts/bench-wts.sh](https://github.com/ggml-org/whisper.cpp/blob/master/scripts/bench-wts.sh) script to generate a video in the following format:
```bash
./scripts/bench-wts.sh samples/jfk.wav
ffplay ./samples/jfk.wav.all.mp4
```
https://user-images.githubusercontent.com/1991296/223206245-2d36d903-cf8e-4f09-8c3b-eb9f9c39d6fc.mp4
---
## Benchmarks
In order to have an objective comparison of the performance of the inference across different system configurations,
use the [whisper-bench](examples/bench) tool. The tool simply runs the Encoder part of the model and prints how much time it
took to execute it. The results are summarized in the following Github issue:
[Benchmark results](https://github.com/ggml-org/whisper.cpp/issues/89)
Additionally, a script to run whisper.cpp with different models and audio files is provided: [bench.py](scripts/bench.py).
You can run it with the following command; by default it will run against any standard model in the `models` folder.
```bash
python3 scripts/bench.py -f samples/jfk.wav -t 2,4,8 -p 1,2
```
It is written in Python with the intention of being easy to modify and extend for your benchmarking use case.
It outputs a CSV file with the benchmark results.
## `ggml` format
The original models are converted to a custom binary format. This allows packing everything needed into a single file:
- model parameters
- mel filters
- vocabulary
- weights
You can download the converted models using the [models/download-ggml-model.sh](models/download-ggml-model.sh) script
or manually from here:
- https://huggingface.co/ggerganov/whisper.cpp
For more details, see the conversion script [models/convert-pt-to-ggml.py](models/convert-pt-to-ggml.py) or [models/README.md](models/README.md).
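As a quick sanity check of the single-file format, the sketch below reads the first four bytes of a converted model. This is an illustration, not part of whisper.cpp's tooling; the magic value `0x67676d6c` ("ggml") is an assumption based on the conversion script referenced above.
```python
# Minimal sketch: verify that a converted model starts with the ggml magic.
# The magic value is an assumption; see models/convert-pt-to-ggml.py for the
# full header layout (hparams, mel filters, vocabulary, weights).
import struct
import sys

def has_ggml_magic(path: str) -> bool:
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return magic == 0x67676D6C  # "ggml"

if __name__ == "__main__":
    model = sys.argv[1] if len(sys.argv) > 1 else "models/ggml-base.en.bin"
    print(model, "looks like a ggml model:", has_ggml_magic(model))
```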
## [Bindings](https://github.com/ggml-org/whisper.cpp/discussions/categories/bindings)
- [x] Rust: [tazz4843/whisper-rs](https://github.com/tazz4843/whisper-rs) | [#310](https://github.com/ggml-org/whisper.cpp/discussions/310)
- [x] JavaScript: [bindings/javascript](bindings/javascript) | [#309](https://github.com/ggml-org/whisper.cpp/discussions/309)
  - React Native (iOS / Android): [whisper.rn](https://github.com/mybigday/whisper.rn)
- [x] Go: [bindings/go](bindings/go) | [#312](https://github.com/ggml-org/whisper.cpp/discussions/312)
- [x] Java:
  - [GiviMAD/whisper-jni](https://github.com/GiviMAD/whisper-jni)
- [x] Ruby: [bindings/ruby](bindings/ruby) | [#507](https://github.com/ggml-org/whisper.cpp/discussions/507)
- [x] Objective-C / Swift: [ggml-org/whisper.spm](https://github.com/ggml-org/whisper.spm) | [#313](https://github.com/ggml-org/whisper.cpp/discussions/313)
  - [exPHAT/SwiftWhisper](https://github.com/exPHAT/SwiftWhisper)
- [x] .NET: | [#422](https://github.com/ggml-org/whisper.cpp/discussions/422)
  - [sandrohanea/whisper.net](https://github.com/sandrohanea/whisper.net)
  - [NickDarvey/whisper](https://github.com/NickDarvey/whisper)
- [x] Python: | [#9](https://github.com/ggml-org/whisper.cpp/issues/9)
  - [stlukey/whispercpp.py](https://github.com/stlukey/whispercpp.py) (Cython)
  - [AIWintermuteAI/whispercpp](https://github.com/AIWintermuteAI/whispercpp) (Updated fork of aarnphm/whispercpp)
  - [aarnphm/whispercpp](https://github.com/aarnphm/whispercpp) (Pybind11)
  - [abdeladim-s/pywhispercpp](https://github.com/abdeladim-s/pywhispercpp) (Pybind11)
- [x] R: [bnosac/audio.whisper](https://github.com/bnosac/audio.whisper)
- [x] Unity: [macoron/whisper.unity](https://github.com/Macoron/whisper.unity)
## XCFramework
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS,
and macOS. It can be used in Swift projects without the need to compile the
library from source. For example, the v1.7.5 version of the XCFramework can be
used as follows:
```swift
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "Whisper",
targets: [
.executableTarget(
name: "Whisper",
dependencies: [
"WhisperFramework"
]),
.binaryTarget(
name: "WhisperFramework",
url: "https://github.com/ggml-org/whisper.cpp/releases/download/v1.7.5/whisper-v1.7.5-xcframework.zip",
checksum: "c7faeb328620d6012e130f3d705c51a6ea6c995605f2df50f6e1ad68c59c6c4a"
)
]
)
```
## Voice Activity Detection (VAD)
Support for Voice Activity Detection (VAD) can be enabled using the `--vad`
argument to `whisper-cli`. In addition to this option, a VAD model is also required.
The way this works is that the audio samples are first passed through the VAD model, which detects speech segments. Using this
information, only the detected speech segments are extracted from the original audio input and passed to whisper for processing.
This reduces the amount of audio data that whisper needs to process and can significantly speed up the transcription process.
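Conceptually, the flow looks like the following sketch (plain NumPy, not whisper.cpp internals): the VAD reports speech segments as time ranges, and only the samples inside those ranges are kept and handed to whisper. The segment values below are made up for illustration.
```python
# Conceptual sketch of VAD-based filtering; the real pipeline lives inside whisper-cli.
import numpy as np

def extract_speech(samples: np.ndarray, segments, sample_rate: int = 16000) -> np.ndarray:
    """Keep only the samples inside the (start_s, end_s) speech segments."""
    kept = [samples[int(s * sample_rate):int(e * sample_rate)] for s, e in segments]
    return np.concatenate(kept) if kept else samples[:0]

audio = np.zeros(11 * 16000, dtype=np.float32)      # an 11 s clip, e.g. jfk.wav
speech_only = extract_speech(audio, [(0.3, 4.9), (5.2, 10.5)])
print(audio.shape, "->", speech_only.shape)          # fewer samples reach whisper
```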
The following VAD models are currently supported:
### Silero-VAD
[Silero-vad](https://github.com/snakers4/silero-vad) is a lightweight VAD model
written in Python that is fast and accurate.
Models can be downloaded by running the following command on Linux or macOS:
```console
$ ./models/download-vad-model.sh silero-v5.1.2
Downloading ggml model silero-v5.1.2 from 'https://huggingface.co/ggml-org/whisper-vad' ...
ggml-silero-v5.1.2.bin 100%[==============================================>] 864.35K --.-KB/s in 0.04s
Done! Model 'silero-v5.1.2' saved in '/path/models/ggml-silero-v5.1.2.bin'
You can now use it like this:
$ ./build/bin/whisper-cli -vm /path/models/ggml-silero-v5.1.2.bin --vad -f samples/jfk.wav -m models/ggml-base.en.bin
```
And the following command on Windows:
```console
> .\models\download-vad-model.cmd silero-v5.1.2
Downloading vad model silero-v5.1.2...
Done! Model silero-v5.1.2 saved in C:\Users\danie\work\ai\whisper.cpp\ggml-silero-v5.1.2.bin
You can now use it like this:
C:\path\build\bin\Release\whisper-cli.exe -vm C:\path\ggml-silero-v5.1.2.bin --vad -m models/ggml-base.en.bin -f samples\jfk.wav
```
To see a list of all available models, run the above commands without any
arguments.
This model can also be converted manually to ggml using the following command:
```console
$ python3 -m venv venv && source venv/bin/activate
(venv) $ pip install silero-vad
(venv) $ python models/convert-silero-vad-to-ggml.py --output models/silero.bin
Saving GGML Silero-VAD model to models/silero-v5.1.2-ggml.bin
```
And it can then be used with whisper as follows:
```console
$ ./build/bin/whisper-cli \
--file ./samples/jfk.wav \
--model ./models/ggml-base.en.bin \
--vad \
--vad-model ./models/silero-v5.1.2-ggml.bin
```
### VAD Options
* --vad-threshold: Threshold probability for speech detection. A probability
for a speech segment/frame above this threshold will be considered as speech.
* --vad-min-speech-duration-ms: Minimum speech duration in milliseconds. Speech
segments shorter than this value will be discarded to filter out brief noise or
false positives.
* --vad-min-silence-duration-ms: Minimum silence duration in milliseconds. Silence
periods must be at least this long to end a speech segment. Shorter silence
periods will be ignored and included as part of the speech.
* --vad-max-speech-duration-s: Maximum speech duration in seconds. Speech segments
longer than this will be automatically split into multiple segments at silence
points exceeding 98ms to prevent excessively long segments.
* --vad-speech-pad-ms: Speech padding in milliseconds. Adds this amount of padding
before and after each detected speech segment to avoid cutting off speech edges.
* --vad-samples-overlap: Amount of audio to extend from each speech segment into
the next one, in seconds (e.g., 0.10 = 100ms overlap). This ensures speech isn't
cut off abruptly between segments when they're concatenated together.
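To make the interplay of these options concrete, here is an illustrative Python sketch (not whisper.cpp's actual implementation) that turns per-frame speech probabilities into padded speech segments using the threshold, minimum-duration, and padding options described above:
```python
# Illustrative only: parameter names mirror the flags above, but the real
# algorithm inside whisper.cpp may differ in its details.
def probs_to_segments(probs, frame_ms=32, threshold=0.5,
                      min_speech_ms=250, min_silence_ms=100, pad_ms=30):
    # 1) frames with probability above the threshold count as speech
    speech = [p > threshold for p in probs]
    # 2) collect raw runs of consecutive speech frames as [start_ms, end_ms)
    runs, start = [], None
    for i, flag in enumerate(speech + [False]):      # sentinel closes the last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append([start * frame_ms, i * frame_ms])
            start = None
    # 3) merge runs separated by silence shorter than min_silence_ms
    merged = []
    for run in runs:
        if merged and run[0] - merged[-1][1] < min_silence_ms:
            merged[-1][1] = run[1]
        else:
            merged.append(run)
    # 4) drop segments shorter than min_speech_ms, then pad both ends
    return [(max(0, a - pad_ms), b + pad_ms) for a, b in merged if b - a >= min_speech_ms]

print(probs_to_segments([0.1, 0.9, 0.9, 0.9, 0.2, 0.9, 0.9, 0.9, 0.9, 0.9, 0.1]))
# [(2, 350)] -> a single padded speech segment, in milliseconds
```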
## Examples
There are various examples of using the library for different projects in the [examples](examples) folder.
Some of the examples are even ported to run in the browser using WebAssembly. Check them out!
| Example | Web | Description |
| --------------------------------------------------- | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| [whisper-cli](examples/cli) | [whisper.wasm](examples/whisper.wasm) | Tool for translating and transcribing audio using Whisper |
| [whisper-bench](examples/bench) | [bench.wasm](examples/bench.wasm) | Benchmark the performance of Whisper on your machine |
| [whisper-stream](examples/stream) | [stream.wasm](examples/stream.wasm) | Real-time transcription of raw microphone capture |
| [whisper-command](examples/command) | [command.wasm](examples/command.wasm) | Basic voice assistant example for receiving voice commands from the mic |
| [whisper-server](examples/server) | | HTTP transcription server with OAI-like API |
| [whisper-talk-llama](examples/talk-llama) | | Talk with a LLaMA bot |
| [whisper.objc](examples/whisper.objc) | | iOS mobile application using whisper.cpp |
| [whisper.swiftui](examples/whisper.swiftui) | | SwiftUI iOS / macOS application using whisper.cpp |
| [whisper.android](examples/whisper.android) | | Android mobile application using whisper.cpp |
| [whisper.nvim](examples/whisper.nvim) | | Speech-to-text plugin for Neovim |
| [generate-karaoke.sh](examples/generate-karaoke.sh) | | Helper script to easily [generate a karaoke video](https://youtu.be/uj7hVta4blM) of raw audio capture |
| [livestream.sh](examples/livestream.sh) | | [Livestream audio transcription](https://github.com/ggml-org/whisper.cpp/issues/185) |
| [yt-wsp.sh](examples/yt-wsp.sh) | | Download + transcribe and/or translate any VOD [(original)](https://gist.github.com/DaniruKun/96f763ec1a037cc92fe1a059b643b818) |
| [wchess](examples/wchess) | [wchess.wasm](examples/wchess) | Voice-controlled chess |
## [Discussions](https://github.com/ggml-org/whisper.cpp/discussions)
If you have any kind of feedback about this project feel free to use the Discussions section and open a new topic.
You can use the [Show and tell](https://github.com/ggml-org/whisper.cpp/discussions/categories/show-and-tell) category
to share your own projects that use `whisper.cpp`. If you have a question, make sure to check the
[Frequently asked questions (#126)](https://github.com/ggml-org/whisper.cpp/discussions/126) discussion.
|
https://github.com/VirtualBox/virtualbox
|
virtualbox
Source code for Oracle VirtualBox
Languages: C (61.9%), C++ (25.8%), Assembly (6.0%), Python (2.7%), Perl (1.3%), Shell (0.5%)
.github/ISSUE_TEMPLATE
debian
doc
include
src
...
.dir-locals.el
.gitignore
.gitmodules
.scm-settings
CONTRIBUTING.md
> README.md
# Oracle VirtualBox
VirtualBox is a general-purpose full virtualization software for x86_64
hardware (with version 7.1 additionally for macOS/Arm), targeted at laptop,
desktop, server and embedded use.
It features a very user friendly graphical user interface and is available for
many popular operating systems (Linux, Windows, macOS and Solaris). Flexible
networking setup and interactive performance are the strong points.
Anyone with the need to run multiple operating systems simultaneously with some
basic knowledge about PCs and operating system installation can use it to
reduce effort with a large number of tasks including software testing.
## Getting started
VirtualBox is a complex product with multiple dependencies, some of them
specific to the operating system on which you want to run it.
The basics for building VirtualBox are described on the [build
instructions](https://www.virtualbox.org/wiki/Build_instructions) page.
## Documentation
The [VirtualBox User
Guide](https://docs.oracle.com/en/virtualization/virtualbox/index.html)
contains all information relevant for users, including the product features and
their configuration.
For developers it is recommended to start with the [technical
documentation](https://www.virtualbox.org/wiki/Technical_documentation) which
contains links to a broad collection of pages related to development, covering
many aspects of the project and its features.
## Examples
Tutorials on how to install and use Oracle VirtualBox are available at
[Learn to Install Oracle VirtualBox and Run Virtual Machines](https://blogs.oracle.com/linux/post/learn-to-install-oracle-virtualbox-and-run-virtual-machines)
and [Use Oracle VirtualBox on Oracle Linux](https://docs.oracle.com/en/learn/ol-vbox/index.html).
## Help
Oracle customers with a support contract covering Oracle VirtualBox should
reach out to [Oracle Support](https://www.oracle.com/support/).
Everyone can use the [VirtualBox Forums](https://forums.virtualbox.org/)
for questions about the product or discussing its functionality. Open an [issue](https://github.com/VirtualBox/virtualbox/issues)
for bug reports or requests for enhancements. Report a security vulnerability
according to the [Reporting Vulnerabilities Guide](https://www.oracle.com/corporate/security-practices/assurance/vulnerability/reporting.html).
## Contributing
This project welcomes contributions from the community. Before submitting a
pull request, please [review our contribution guide](./CONTRIBUTING.md).
## Security
Please consult the [security guide](./SECURITY.md) for our responsible security vulnerability disclosure process.
## License
The correct copyright notice format for both documentation and software is
Copyright (C) [year-]year Oracle and/or its affiliates.
This file is part of VirtualBox base platform packages, as
available from https://www.virtualbox.org.
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation, in version 3 of the
License.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, see <https://www.gnu.org/licenses>.
You must include the year the content was first released (on any platform) and
the most recent year in which it was revised:
Copyright (C) 2025 Oracle and/or its affiliates.
Released under the GNU General Public License v3.0 as shown at
[COPYING](./COPYING) which contains clarifications regarding allowed licenses
for other code using parts of the project which are covered by multiple
licenses.
|
https://github.com/facebookresearch/volumetric_primitives
|
volumetric_primitives
This repository contains the implementation of our novel approach associated with the paper "Don't Splat Your Gaussians" to modeling and rendering scattering and emissive media using volumetric primitives with the Mitsuba renderer.
Languages: Python (100.0%)
examples
resources
volprim
...
.gitignore
CODE_OF_CONDUCT.md
CONTRIBUTING.md
LICENSE.md
README.md
> README.md
# Don’t Splat your Gaussians: Volumetric Ray-Traced Primitives for Modeling and Rendering Scattering and Emissive Media

Copyright (c) Meta Platforms, Inc. and affiliates. This source code is licensed under the MIT license found in the LICENSE file in the root directory of this source tree.
This repository contains the implementation of our novel approach, associated with the paper ["Don't Splat Your Gaussians"](https://dl.acm.org/doi/10.1145/3711853), to modeling and rendering scattering and emissive media using volumetric primitives with the Mitsuba renderer. It implements ray tracing of volumetric primitives on top of Mitsuba, as described in the paper, and provides utility functions for IO, benchmarking, and several integrator algorithms for rendering volumetric primitives.
Abstract: *Efficient scene representations are essential for many computer graphics applications. A general unified representation that can handle both surfaces and volumes simultaneously, remains a research challenge. Inspired by recent methods for scene reconstruction that leverage mixtures of 3D Gaussians to model radiance fields, we formalize and generalize the modeling of scattering and emissive media using mixtures of simple kernel-based volumetric primitives. We introduce closed-form solutions for transmittance and free-flight distance sampling for different kernels, and propose several optimizations to use our method efficiently within any off-the-shelf volumetric path tracer. We demonstrate our method as a compact and efficient alternative to other forms of volume modeling for forward and inverse rendering of scattering media. Furthermore, we adapt and showcase our method in radiance field optimization and rendering, providing additional flexibility compared to current state of the art given its ray-tracing formulation. We also introduce the Epanechnikov kernel and demonstrate its potential as an efficient alternative to the traditionally-used Gaussian kernel in scene reconstruction tasks. The versatility and physically-based nature of our approach allows us to go beyond radiance fields and bring to kernel-based modeling and rendering any path-tracing enabled functionality such as scattering, relighting and complex camera models.*
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@article{10.1145/3711853,
author = {Condor, Jorge and Speierer, Sebastien and Bode, Lukas and Bozic, Aljaz and Green, Simon and Didyk, Piotr and Jarabo, Adrian},
title = {Don't Splat your Gaussians: Volumetric Ray-Traced Primitives for Modeling and Rendering Scattering and Emissive Media},
year = {2025},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
issn = {0730-0301},
url = {https://doi.org/10.1145/3711853},
doi = {10.1145/3711853},
note = {Just Accepted},
journal = {ACM Trans. Graph.},
month = jan,
keywords = {Volume Rendering, Scattering, Radiance Fields, 3D Reconstruction, Volumetric Primitives, Volumetric Representations, Ray Tracing, Inverse Rendering}
}</code></pre>
</div>
</section>
## Installation
> 💥WARNING💥
>
> The implementation of the volumetric primitives required some changes in the Mitsuba renderer ([see associated PR](https://github.com/mitsuba-renderer/mitsuba3/pull/1464)). We are in the process of including those changes in the official codebase. Until then, you will need to build the [`ellipsoids_release` branch](https://github.com/mitsuba-renderer/mitsuba3/tree/ellipsoids_release) of Mitsuba to use this repository.
To install the required dependencies, run:
```bash
pip install -r requirements.txt
```
or using conda:
```bash
conda env create --file environment.yml
conda activate volprim
```
Then install the `volprim` library to your local Python environment:
```bash
pip install -e .
```
## Integrators
This repository introduces integrators that can be used to render volumetric primitives for different applications:
- `volprim_rf`: VPRF integrator described in the paper, useful for rendering 3D Gaussian Splatting -like assets.
- `volprim_prb`: VPPT integrator described in the paper, useful for render volumetric scattering media.
- `volprim_tomography`: A simple tomography integrator that only accounts for the absorption in the volume.
## Example scripts
### `render_3dg_asset.py`
This script is a simple example of how to use Mitsuba to render a 3DG asset
from the original 3D Gaussian Splatting paper datasets. Such datasets can be downloaded from the [official 3D Gaussian Splatting website](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/).
Example:
```
python .\examples\render_3dg_asset.py --ply datasets\truck\point_cloud\iteration_30000\point_cloud.ply --cameras datasets\truck\cameras.json
```
### `refine_3dg_dataset.py`

This script can be used to refine a 3DG asset with our integrator and a different kernel. It will produce a Python asset that can later be rendered using the `render_asset.py` script.
Example:
```
python .\examples\refine_3dg_dataset.py --ply datasets\truck\point_cloud\iteration_30000\point_cloud.ply --cameras datasets\truck\cameras.json --images datasets\tandt_db\tandt\truck\images --output output_refine --cam_scale 0.125 --cam_count 4
```
### `render_asset.py`
This script can be used to render a Mitsuba Python asset resulting from an optimization pipeline.
Example:
```
python .\examples\render_asset.py output_refine\optimized_asset
```
### `optimize_volume.py`

This script implements a simple optimization pipeline that converts a volume grid into a set of volumetric primitives. It uses the `volprim_tomography` integrator, which integrates the density of overlapping volumetric primitives along the ray, with no scattering.
The resulting optimized set of volumetric primitives can then be rendered using the `render_volume.py` script as described below.
Example:
```
python examples/optimize_volume.py --output output_tomo --volume_grid resources/smoke.vol --cam_count 4 --cam_res 128
```
### `render_volume.py`

This script can be used to render a set of volumetric primitives representing a scattering media, using the `volprim_prb` integrator.
Example:
```
python .\examples\render_asset.py output_refine\optimized_asset
```
## License
This project is MIT licensed, as found in the LICENSE.md file.
|
https://github.com/nebius/kvax
|
kvax
A FlashAttention implementation for JAX with support for efficient document mask computation and context parallelism.
Languages: Python (100.0%)
assets
kvax
tests
...
.flake8
.gitignore
.isort.cfg
.pre-commit-config.yaml
CONTRIBUTING.md
> README.md
# Kvax: fast and easy-to-use flash attention implementation for JAX
Kvax is an open-source library offering fast and efficient attention operations for the JAX framework. Built with [Flash Attention 2](https://arxiv.org/abs/2307.08691) algorithms implemented in the Triton language, it is optimised for high-performance attention computation with document masks and supports context parallelism. Kvax is designed to perform exceptionally well in distributed training scenarios on long sequences using FSDP/HSDP sharding.
More technical details in our blogpost: https://nebius.com/blog/posts/kvax-open-source-flash-attention-for-jax
#### Table of Contents:
- [Key Concepts of Kvax Implementation](#key-concepts-of-kvax-implementation)
- [Kvax Features](#kvax-features)
- [Kvax Results](#kvax-results)
- [How to install](#how-to-install)
- [How to use](#how-to-use)
- [Package Description](#package-description)
- [Benchmarks](#benchmarks)
- [Limitations](#limitations)
- [Contributing](#contributing)
- [Citation](#citation)
- [License](#license)
## Key Concepts of Kvax Implementation
### Document Mask Optimisation
When training transformer models on long sequences, a significant amount of compute is spent on attention operations due to the quadratic complexity of the attention algorithm. [Flash Attention algorithm](https://github.com/Dao-AILab/flash-attention) offers hardware-specific optimisations to significantly reduce latency and memory requirements for these operations.
During training on long sequences, dense packing is often used to maximise compute resource utilisation. In this approach, multiple data points are packed into a single sequence while avoiding cross-sequence attention contamination. The main idea is to calculate only the blocks of attention weights that include tokens which should attend to each other while skipping other blocks. Various methods can efficiently handle this, with [PyTorch's FlexAttention](https://pytorch.org/blog/flexattention/) being one example. Kvax takes a similar approach to achieve high performance in these scenarios.
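As a rough illustration of the block-wise idea (plain NumPy, not Kvax's Triton kernel): a (query block, key/value block) pair only needs to be computed if at least one query token in it may attend to at least one key/value token, i.e. they share a packed document (segment) and the causal constraint can be satisfied.
```python
import numpy as np

def block_mask(q_seg, kv_seg, q_pos, kv_pos, block=128):
    """Return a boolean (n_q_blocks, n_kv_blocks) map of blocks that must be computed."""
    n_q = (len(q_seg) + block - 1) // block
    n_kv = (len(kv_seg) + block - 1) // block
    need = np.zeros((n_q, n_kv), dtype=bool)
    for qi in range(n_q):
        qs = slice(qi * block, (qi + 1) * block)
        for ki in range(n_kv):
            ks = slice(ki * block, (ki + 1) * block)
            same_doc = q_seg[qs, None] == kv_seg[None, ks]   # same packed document
            causal = q_pos[qs, None] >= kv_pos[None, ks]     # causal constraint
            need[qi, ki] = bool(np.any(same_doc & causal))
    return need  # blocks where this is False can be skipped entirely

segs = np.array([0] * 100 + [1] * 156)   # two documents packed into one sequence
pos = np.arange(256)
print(block_mask(segs, segs, pos, pos, block=64).astype(int))
```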
### Context Parallelism
Using long sequences during training can also lead to high GPU memory consumption for storing layer activations. Context parallelism helps solve this problem, speeding up the computations and reducing memory required for layer activations.
There are several approaches to implementing context parallelism for transformer architectures, such as [RingAttention](https://arxiv.org/abs/2310.01889) and the all-gather based method. The all-gather based method, described in the [Llama 3 training paper](https://arxiv.org/abs/2407.21783), performs an all-gather on the key and value tensors, collecting them before the attention computation; this is feasible because of their lower memory requirements enabled by [GQA](https://arxiv.org/abs/2305.13245). This method is particularly well-suited for document masks, and Kvax leverages it in its implementation.
## Kvax Features
- **Block-wise Attention Masks**: Like [FlexAttention](https://pytorch.org/blog/flexattention/), our implementation builds the attention mask once per forward-backward pass, reusing it across layers. Our high-performance Triton kernel builds this mask blockwise, and does not require `O(seq_len^2)` GPU memory.
- **Optimised Memory Storage**: Kvax stores attention masks in block-wise format, requiring `3 * 4 * batch_size * seq_len // block_size * 4` bytes, where `block_size` is typically 64 or 128 (see the worked example after this list).
- **Skipping Pad Tokens**: Kvax skips blocks consisting entirely of padding tokens. See the "How to Use" section for details on defining padding tokens.
- **Context Parallelism**: Kvax balances tokens across GPUs to ensure equal attention operation loads, accounting for causal masks. This feature is described in [Llama 3 training paper](https://arxiv.org/abs/2407.21783) and fully integrates with document mask optimisations.
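For a sense of scale, here is the memory formula from the list above evaluated for a couple of typical settings (an illustrative calculation, not a measurement):
```python
def mask_bytes(batch_size, seq_len, block_size=128):
    # block-wise mask footprint, per the formula quoted above
    return 3 * 4 * batch_size * seq_len // block_size * 4

print(mask_bytes(1, 32_768))    # 12_288 bytes (~12 KiB)
print(mask_bytes(8, 131_072))   # 393_216 bytes (~384 KiB)
# compare with a dense O(seq_len^2) mask: 32_768**2 = 1 GiB at one byte per entry
```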
## Kvax Results


More details on Kvax benchmarking and its results can be found in the [blogpost](https://nebius.com/blog/posts/kvax-open-source-flash-attention-for-jax#results).
## How to install
Install the latest stable release from pip:
```bash
pip install kvax
```
**Note: The automatically installed versions of Triton and JAX-Triton might not be compatible. If you encounter an error while running the provided benchmarks, please ensure that you install compatible versions manually. For benchmarking, we used `triton==3.1` and `jax-triton==0.2.0`.**
## How to use
First, ensure that the position of every padding token is marked with `PADDING_SEGMENT_ID` in the `query_segment_ids` and `kv_segment_ids` tensors:
```python
from kvax.utils import PADDING_SEGMENT_ID
# In this example, the sequence length is 8, and there are 2 padding tokens.
pad_token_id = 128001
input_ids = [6151, 0, 52043, 710, 374, 1618, pad_token_id, pad_token_id]
query_segment_ids = [0, 0, 0, 0, 0, 0, PADDING_SEGMENT_ID, PADDING_SEGMENT_ID]
kv_segment_ids = [0, 0, 0, 0, 0, 0, PADDING_SEGMENT_ID, PADDING_SEGMENT_ID]
```
Then, kvax functions can be used in the transformer code:
```python
import flax.linen as nn
from kvax.ops import (
create_attention_mask,
flash_attention,
)
from kvax.utils import (
attention_specs,
permute_tokens_context_parallelism,
unpermute_tokens_context_parallelism,
)
class AttentionLayer(nn.Module):
def __call__(
self,
embedding,
query_positions,
query_segment_ids,
kv_positions,
kv_segment_ids,
attn_mask,
):
query, key, value = ...
scale = ...
# Call the Flash Attention op
attn_out = flash_attention(
query=query,
key=key,
value=value,
            query_positions=query_positions,
            query_segment_ids=query_segment_ids,
kv_positions=kv_positions,
kv_segment_ids=kv_segment_ids,
mask=attn_mask,
assume_sequential_positions=self.config.assume_sequential_positions,
scale=scale,
# Mesh is defined as a global context
# mesh=mesh,
)
out = ...
return out
class Transformer(nn.Module):
...
def setup(self):
self.attn_layers = [AttentionLayer(...) for _ in range(self.num_layers)]
self.mlp_layers = ...
def __call__(
self,
embedding,
positions,
segment_ids,
):
# During inference, create kv_positions and kv_segment_ids from positions and segment_ids
# For training they could be simply defined as:
# kv_positions, kv_segment_ids = positions, segment_ids
kv_positions, kv_segment_ids = self._maybe_cache(positions, segment_ids)
# Permute input tokens to balance load between GPUs during context_parallelism
if self._should_permute_input_tokens:
            embedding, query_positions, query_segment_ids = permute_tokens_context_parallelism(
                (embedding, positions, segment_ids),
)
# Call it once and then pass the mask into all attention blocks
attention_mask = create_attention_mask(
query_positions,
query_segment_ids,
kv_positions,
kv_segment_ids,
fwd_params=self.fa_config.fwd_params,
bwd_params=self.fa_config.bwd_params,
skip_pad_tokens=self.fa_config.skip_pad_tokens,
calc_bwd_mask=True,
# Mesh is defined as a global context
# mesh=mesh,
)
# Call transformer's layers sequentially
for attn_layer, mlp_layer in zip(self.attn_layers, self.mlp_layers):
embedding = attn_layer(
embedding,
query_positions,
query_segment_ids,
kv_positions,
kv_segment_ids,
attention_mask,
)
embedding = mlp_layer(...)
# Unpermute outputs
if self._should_permute_input_tokens:
            embedding = unpermute_tokens_context_parallelism(embedding)
logits = ...
return logits
def training_loop(...):
...
# Define mesh as a global context and axes sharding for query, key and value.
# Can be called inside the Transformer class but before
# the first call of create_attention_mask, flash_attention,
# permute_tokens_context_parallelism or unpermute_tokens_context_parallelism.
mesh = jax.sharding.Mesh(mesh_devices, mesh_names)
with mesh, attention_specs(
query_specs=("data", "context", None, None),
kv_specs=("data", None, None, None),
):
...
logits = Transformer(...)(
embeddings,
positions,
segment_ids,
)
```
## Package Description
### Operations
#### **`flash_attention`**
The function that performs the attention operation, based on precomputed masks and the input tensors' sharding specifications. It should be used within the `attention_specs` context manager.
**Arguments**:
- `query`: Query tensor of shape `(batch_size, query_seq_length, num_heads, head_dim)`.
- `key`: Key tensor of shape `(batch_size, kv_seq_length, num_kv_heads, head_dim)`.
- `value`: Value tensor of shape `(batch_size, kv_seq_length, num_kv_heads, head_dim)`.
- `query_positions`: A tensor of query positions with shape `(batch_size, query_seq_length)`. For sequential tokens, use `range(0, query_seq_length)`. This tensor is ignored if `assume_sequential_positions` is set to `True`.
- `query_segment_ids`: A tensor with segment IDs for the query tokens, shaped `(batch_size, query_seq_length)`. Tokens from the same sequence must share the same segment ID. Segment IDs should be in the range `(0, max(int32))`. All padding tokens should be marked with `PADDING_SEGMENT_ID`.
- `kv_positions`: A tensor of key/value positions with shape `(batch_size, kv_seq_length)`. For sequential tokens, use `range(0, kv_seq_length)`. This tensor is ignored if `assume_sequential_positions` is set to `True`.
- `kv_segment_ids`: A tensor with segment IDs for the key/value tokens, shaped `(batch_size, kv_seq_length)`. Tokens from the same sequence must share the same segment ID. Segment IDs should be in the range `(0, max(int32))`. All padding tokens should be marked with `PADDING_SEGMENT_ID`.
- `mask`: Precomputed block-wise mask from the `create_attention_mask` function.
- `scale`: Scaling factor for attention scores. Default is `1.0`.
- `fwd_params`: `FlashAttentionParamsConfig` for the forward pass. Defaults to predefined parameters for the GPU model.
- `bwd_params`: `FlashAttentionParamsConfig` for the backward pass. Defaults to predefined parameters for the GPU model.
- `assume_sequential_positions`: Assumes sequential token positions and skips loading `query_positions` and `kv_positions`. If set to `True`, the attention behaves the same as when `is_causal == True` in `jax.nn.dot_product_attention`. The default is `False`.
- `memory_optimized_gqa_backward`: Enables memory-optimised gradient computation for grouped-query attention when set to `True`. This flag affects performance, making it slower, but can save GPU memory on activations during the backward pass if it becomes a bottleneck. It may be useful for small models with long contexts. The default is `False`.
- `permute_tokens_for_load_balance`: Permutes tokens to achieve better load balancing across GPUs when set to True. Used only during context parallelism. For more details, refer to the [Llama 3 training paper](https://arxiv.org/abs/2407.21783). The default is True.
- `debug`: Prints the low-level IR of the kernel when set to `True`. The default is `False`.
- `mesh`: Device mesh configuration for distributed execution. If set to `None`, it uses the mesh from the global context. An exception is raised if `None` is provided and no mesh is available from the global context. The default is `None`.
**Returns**:
Tensor with attention-weighted values.
#### **`create_attention_mask`**
This function calculates attention masks for both forward and backward Flash Attention operations, using Triton kernels for block-wise computation.
**Arguments**:
- `query_positions`: A tensor of query positions with shape `(batch_size, query_seq_length)`. For sequential tokens, use `range(0, query_seq_length)`.
- `query_segment_ids`: A tensor with segment IDs for the query tokens, shaped `(batch_size, query_seq_length)`. Tokens from the same sequence must share the same segment ID. Segment IDs should be in the range `(0, max(int32))`. All padding tokens should be marked with `PADDING_SEGMENT_ID`.
- `kv_positions`: A tensor of key/value positions with shape `(batch_size, kv_seq_length)`. For sequential tokens, use `range(0, kv_seq_length)`.
- `kv_segment_ids`: A tensor with segment IDs for the key/value tokens, shaped `(batch_size, kv_seq_length)`. Tokens from the same sequence must share the same segment ID. Segment IDs should be in the range `(0, max(int32))`. All padding tokens should be marked with `PADDING_SEGMENT_ID`.
- `fwd_params`: `FlashAttentionParamsConfig` for the forward pass. Defaults to predefined parameters for the GPU model.
- `bwd_params`: `FlashAttentionParamsConfig` for the backward pass. Defaults to predefined parameters for the GPU model.
- `calc_bwd_mask`: Whether to calculate the attention masks for the backward pass. Default is `False`.
- `skip_pad_tokens`: Whether to skip padding tokens in the attention computation. If `True`, blocks that contain only padding tokens will be skipped. Default is `True`.
- `mesh`: Device mesh configuration for distributed execution. If set to `None`, it uses the mesh from the global context. An exception is raised if `None` is provided and no mesh is available from the global context. The default is `None`.
**Returns**:
The forward attention mask and optionally attention masks for the backward pass if `calc_bwd_mask` is `True`.
### Utilities
#### **`FlashAttentionParamsConfig`**
Dataclass that contains parameters for the Flash Attention Triton kernel. Increasing `query_block_size` and `kv_block_size` can lead to better performance but requires more streaming multiprocessor register memory on the GPU.
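A minimal sketch of overriding these kernel parameters, assuming `FlashAttentionParamsConfig` is importable from `kvax.utils` and exposes the `query_block_size`/`kv_block_size` fields named above (check the package for the exact constructor):
```python
from kvax.utils import FlashAttentionParamsConfig  # import path assumed, see note above

# Larger blocks can be faster but need more register memory per streaming multiprocessor.
fwd_params = FlashAttentionParamsConfig(query_block_size=128, kv_block_size=128)
bwd_params = FlashAttentionParamsConfig(query_block_size=64, kv_block_size=64)

# The same objects can be passed as the `fwd_params`/`bwd_params` arguments of
# both `create_attention_mask` and `flash_attention` (see the argument lists above).
```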
#### **`PADDING_SEGMENT_ID`**
Segment ID for padding tokens. This value should correspond to the position of padding tokens in the `kv_segment_ids` and `query_segment_ids` tensors. See the 'How to use' section for an example.
#### **`attention_specs`**
A context manager for setting the attention specifications for the `query` and `key`/`value` tensors. All other sharding specifications in Kvax are derived from these.
**Arguments**:
- `query_specs`: Specifications for sharding the `query` tensor. Specs must have 4 dimensions and provide sharding dimensions for the following axes: <br> `(batch, query_sequence, heads, attention_head_dim)`
- `kv_specs`: Specifications for sharding the `key`/`value` tensors. Specs must have 4 dimensions and provide sharding dimensions for the following axes: <br> `(batch, kv_sequence, kv_heads, attention_head_dim)`
**Notes**:
- Specs must have the same sharding dimensions for `batch` and `attention_head_dim`.
- Typical values for tensor parallelism with Mesh with axes `"data"` and `"model"`: <br>
`query_specs: ("data", None, "model", None)
kv_specs: ("data", None, "model", None)`
- Typical values for context parallelism with Mesh with axes `"data"` and `"context"`: <br>
`query_specs: ("data", "context", None, None)
kv_specs: ("data", None, None, None)`
#### **`permute_tokens_context_parallelism`**
A function to permute tokens across the sequence length `(axis==1)` to balance computation of the attention operation between GPUs for the causal mask case. For more details, please see the [Llama 3 training paper](https://arxiv.org/abs/2407.21783). For examples, please refer to the 'How to use' section.
**Arguments**:
- `inputs`: An input tensor or tuple of tensors to permute.
- `mesh`: Device mesh configuration for distributed execution. If set to `None`, it uses the mesh from the global context. An exception is raised if `None` is provided and no mesh is available from the global context. The default is `None`.
**Returns**:
Permuted tensor or tuple of tensors.
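The reordering behind this function can be illustrated as follows (a NumPy sketch of the Llama-3-style balancing idea, not Kvax's internal sharded implementation): split the sequence into `2 * cp` chunks and give rank `i` the pair `(i, 2*cp - 1 - i)`, so cheap and expensive causal-mask chunks are spread evenly across GPUs.
```python
import numpy as np

def permute_for_load_balance(x, cp):                 # x: (batch, seq_len, ...)
    chunks = np.array_split(x, 2 * cp, axis=1)
    order = [c for i in range(cp) for c in (i, 2 * cp - 1 - i)]
    return np.concatenate([chunks[i] for i in order], axis=1)

tokens = np.arange(16).reshape(1, 16)
print(permute_for_load_balance(tokens, cp=2))
# [[ 0  1  2  3 12 13 14 15  4  5  6  7  8  9 10 11]]
```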
#### **`unpermute_tokens_context_parallelism`**
A function to unpermute tokens across the sequence length `(axis==1)` after the `permute_tokens_context_parallelism` function to return them to their original order. For examples, please refer to the 'How to use' section.
**Arguments**:
- `inputs`: An input tensor or tuple of tensors to unpermute.
- `mesh`: Device mesh configuration for distributed execution. If set to `None`, it uses the mesh from the global context. An exception is raised if `None` is provided and no mesh is available from the global context. The default is `None`.
**Returns**:
A tensor or tuple of tensors with tokens in their original order.
## Benchmarks
**Note**: Before benchmarking, you need to install the required dependencies. First, install the [GPU version of JAX](https://jax.readthedocs.io/en/latest/installation.html). After that, you can install the required dependencies:
```bash
pip install -e .[dev]
```
**Note: The automatically installed versions of Triton and JAX-Triton might not be compatible. If you encounter an error while running the provided benchmarks, please ensure that you install compatible versions manually. For benchmarking, we used `triton==3.1` and `jax-triton==0.2.0`.**
Benchmarking CuDNN implementation vs our implementation:
```bash
# Forward with only 1 segment
python3 benchmarks.py mha
# Forward with 3 segments
python3 benchmarks.py mha --num-segments 3
# Forward+backward with 3 segments and 1000 pad tokens
python3 benchmarks.py mha_bwd --num-segments 3 --num-pad-tokens 1000
# Forward with 3 segments and 1000 pad tokens with printing attention mask
python3 benchmarks.py mha --num-segments 3 --num-pad-tokens 1000 --show-attention-mask
```
Benchmarking context vs tensor parallelism on our implementation:
```bash
# Forward with only 1 segment with token permutation enabled
python3 benchmarks.py mha_cp
# Forward+backward with 12 segments with token permutation disabled
python3 benchmarks.py mha_cp_bwd --num-segments 3 --permute-tokens-for-load-balance false
```
## Limitations
- Bias is not supported.
- Sliding window, [ALiBi](https://arxiv.org/abs/2108.12409), and custom masks are not implemented.
- Context parallelism does not support sharding across kv_sequence as in [RingAttention](https://arxiv.org/abs/2310.01889).
## Contributing
Community contributions are welcome. For more detailed information, please refer to the [contributing guidelines](CONTRIBUTING.md).
## Citation
Please cite as:
```
Skvortsov et al., "Kvax: Fast and easy-to-use Flash Attention implementation for JAX", Nebius blog, 2025.
```
BibTeX citation:
```
@article{skvortsov2025kvax,
title={Kvax: Fast and easy-to-use Flash Attention implementation for JAX},
author={Skvortsov, Sergei and Fisin, Filipp and Trofimova, Maria and Yangel, Boris},
year={2025},
journal={Nebius blog},
note={}
}
```
## License
This project is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for details.
---
© Nebius BV, 2025
|
https://github.com/tile-ai/tilelang
|
tilelang
Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels
Languages: C++ (61.8%), Python (36.9%), Shell (0.7%), Cython (0.2%), Cuda (0.2%), CMake (0.2%)
.github/workflows
3rdparty
benchmark
docker
docs
...
.clang-tidy
.gitattributes
.gitignore
.gitmodules
CMakeLists.txt
> README.md
<img src=./images/logo-row.svg />
<div align="center">
# Tile Language
[](https://badge.fury.io/py/tilelang)
[](https://deepwiki.com/tile-ai/tilelang) [](https://discord.gg/TUrHyJnKPG)
</div>
Tile Language (**tile-lang**) is a concise domain-specific language designed to streamline the development of high-performance GPU/CPU kernels (e.g., GEMM, Dequant GEMM, FlashAttention, LinearAttention). By employing a Pythonic syntax with an underlying compiler infrastructure on top of [TVM](https://tvm.apache.org/), tile-lang allows developers to focus on productivity without sacrificing the low-level optimizations necessary for state-of-the-art performance.
<img src=./images/MatmulExample.png />
## Latest News
- 07/04/2025 🚀: Introduced `T.gemm_sp` for 2:4 sparse tensor core support, check out [Pull Request #526](https://github.com/tile-ai/tilelang/pull/526) for details.
- 06/05/2025 ✨: Added [NVRTC Backend](https://github.com/tile-ai/tilelang/pull/461) to significantly reduce compilation time for cute templates!
- 04/14/2025 🚀: Added high-performance FlashMLA implementation for AMD MI300X, achieving performance parity with hand-optimized assembly kernels of Aiter! See [example_mla_amd](./examples/deepseek_mla/amd/README.md) for details.
- 03/03/2025 🚀: Added high-performance MLA Decoding support using only 80 lines of Python code, achieving performance on par with FlashMLA on H100 (see [example_mla_decode.py](./examples/deepseek_mla/example_mla_decode.py))! We also provide [documentation](./examples/deepseek_mla/README.md) explaining how TileLang achieves this.
- 02/15/2025 ✨: Added WebGPU Codegen support, see [Pull Request #86](https://github.com/tile-ai/tilelang/pull/86)!
- 02/12/2025 ✨: Excited to announce the release of [v0.1.0](https://github.com/tile-ai/tilelang/releases/tag/v0.1.0)!
- 02/10/2025 🚀: Added debug tools for TileLang—`T.print` for printing variables/buffers ([docs](https://tilelang.com/tutorials/debug_tools_for_tilelang.html)) and a memory layout plotter ([examples/plot_layout](./examples/plot_layout)).
- 01/20/2025 ✨: We are excited to announce that tile-lang, a dsl for high performance AI workloads, is now open source and available to the public!
## Tested Devices
Although tile-lang aims to be portable across a range of devices, it has been specifically tested and validated on the following: for NVIDIA GPUs, this includes the H100 (with Auto TMA/WGMMA support), A100, V100, RTX 4090, RTX 3090, and RTX A6000; for AMD GPUs, it includes the MI250 (with Auto MatrixCore support) and the MI300X (with Async Copy support).
## OP Implementation Examples
**tile-lang** provides the building blocks to implement a wide variety of operators. Some examples include:
- [Matrix Multiplication](./examples/gemm/)
- [Dequantization GEMM](./examples/dequantize_gemm/)
- [Flash Attention](./examples/flash_attention/)
- [Flash Linear Attention](./examples/linear_attention/)
- [Flash MLA Decoding](./examples/deepseek_mla/)
- [Native Sparse Attention](./examples/deepseek_nsa/)
Within the `examples` directory, you will also find additional complex kernels, such as convolutions and forward/backward passes for FlashAttention; more operators will be added continuously.
## Benchmark Summary
TileLang achieves exceptional performance across a variety of computational patterns. Comprehensive benchmark scripts and settings are available at [tilelang-benchmark](https://github.com/tile-ai/tilelang-benchmark). Below are selected results showcasing its capabilities:
- MLA Decoding Performance on H100
<div style="display: flex; gap: 10px; justify-content: center;">
<div style="flex: 1;">
<img src="./examples/deepseek_mla/figures/bs64_float16.png" alt="mla decode performance bs64 on H100" width="100%" />
</div>
<div style="flex: 1;">
<img src="./examples/deepseek_mla/figures/bs128_float16.png" alt="mla decode performance bs128 on H100" width="100%" />
</div>
</div>
- Flash Attention Performance on H100
<div align="center"> <img src="./images/mha_performance_h100.png" alt="operator performance on H100" width=80% />
</div>
- Matmul Performance on GPUs (RTX 4090, A100, H100, MI300X)
<div>
<img src="./images/op_benchmark_consistent_gemm_fp16.png" alt="gemm fp16 performance on Gpus" />
</div>
- Dequantize Matmul Performance on A100
<div>
<img src="./images/op_benchmark_a100_wq_gemv.png" alt="dequantize gemv performance on A100" />
</div>
## Installation
### Method 1: Install with Pip
The quickest way to get started is to install the latest release from PyPI:
```bash
pip install tilelang
```
Alternatively, you can install directly from the GitHub repository:
```bash
pip install git+https://github.com/tile-ai/tilelang
```
Or install locally:
```bash
# install required system dependencies
sudo apt-get update
sudo apt-get install -y python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev
pip install -e . -v # remove -e option if you don't want to install in editable mode, -v for verbose output
```
### Method 2: Build from Source
We currently provide three ways to install **tile-lang** from source:
- [Install from Source (using your own TVM installation)](./docs/get_started/Installation.md#method-1-install-from-source-using-your-own-tvm-installation)
- [Install from Source (using the bundled TVM submodule)](./docs/get_started/Installation.md#method-2-install-from-source-using-the-bundled-tvm-submodule)
- [Install Using the Provided Script](./docs/get_started/Installation.md#method-3-install-using-the-provided-script)
### Method 3: Install with Nightly Version
For users who want access to the latest features and improvements before official releases, we provide nightly builds of **tile-lang**.
```bash
pip install tilelang -f https://tile-ai.github.io/whl/nightly/cu121/
# or pip install tilelang --find-links https://tile-ai.github.io/whl/nightly/cu121/
```
> **Note:** Nightly builds contain the most recent code changes but may be less stable than official releases. They're ideal for testing new features or if you need a specific bugfix that hasn't been released yet.
## Quick Start
In this section, you'll learn how to write and execute a straightforward GEMM (matrix multiplication) kernel using tile-lang, followed by techniques for layout optimizations, pipelining, and L2-cache–friendly swizzling.
### GEMM Example with Annotations (Layout, L2 Cache Swizzling, and Pipelining, etc.)
Below is an example that demonstrates more advanced features: layout annotation, parallelized copy, and swizzle for improved L2 cache locality. This snippet shows how to adapt your kernel to maximize performance on complex hardware.
```python
import tilelang
import tilelang.language as T
# `make_mma_swizzle_layout` is a Python-defined layout function
# specifically designed for MMA operations.
# It ensures consistency with the NVIDIA CUTLASS library,
# avoiding bank conflicts and maximizing performance.
from tilelang.intrinsics import (
make_mma_swizzle_layout as make_swizzle_layout,)
# add decorator @tilelang.jit if you want to return a torch function
# @tilelang.jit
def matmul(M, N, K, block_M, block_N, block_K, dtype="float16", accum_dtype="float"):
@T.prim_func
def main(
A: T.Tensor((M, K), dtype),
B: T.Tensor((K, N), dtype),
C: T.Tensor((M, N), dtype),
):
# Initialize Kernel Context
with T.Kernel(T.ceildiv(N, block_N), T.ceildiv(M, block_M), threads=128) as (bx, by):
A_shared = T.alloc_shared((block_M, block_K), dtype)
B_shared = T.alloc_shared((block_K, block_N), dtype)
C_local = T.alloc_fragment((block_M, block_N), accum_dtype)
# Apply layout optimizations or define your own layout (Optional)
# If not specified, we will deduce the layout automatically
# T.annotate_layout({
# A_shared: make_swizzle_layout(A_shared),
# B_shared: make_swizzle_layout(B_shared),
# })
# Enable rasterization for better L2 cache locality (Optional)
# T.use_swizzle(panel_size=10, enable=True)
# Clear local accumulation
T.clear(C_local)
for ko in T.Pipelined(T.ceildiv(K, block_K), num_stages=3):
# Copy tile of A
# This is a sugar syntax for parallelized copy
T.copy(A[by * block_M, ko * block_K], A_shared)
# Demonstrate parallelized copy from global to shared for B
for k, j in T.Parallel(block_K, block_N):
B_shared[k, j] = B[ko * block_K + k, bx * block_N + j]
# Perform a tile-level GEMM on the shared buffers
# Currently we dispatch to the cute/hip on Nvidia/AMD GPUs
T.gemm(A_shared, B_shared, C_local)
# Copy result back to global memory
T.copy(C_local, C[by * block_M, bx * block_N])
return main
# 1. Define the kernel (matmul) with the desired dimensions
func = matmul(1024, 1024, 1024, 128, 128, 32)
# 2. Compile the kernel into a torch function
# out_idx specifies the index of the output buffer in the argument list
# if out_idx is specified, the tensor will be created during runtime
# target currently can be "cuda" or "hip" or "cpu".
jit_kernel = tilelang.compile(func, out_idx=[2], target="cuda")
# 3. Test the kernel in Python with PyTorch data
import torch
# Create random input tensors on the GPU
a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
# Run the kernel through the JIT-compiled function
c = jit_kernel(a, b)
# Reference multiplication using PyTorch
ref_c = a @ b
# Validate correctness
torch.testing.assert_close(c, ref_c, rtol=1e-2, atol=1e-2)
print("Kernel output matches PyTorch reference.")
# 4. Retrieve and inspect the generated CUDA source (optional)
cuda_source = jit_kernel.get_kernel_source()
print("Generated CUDA kernel:\n", cuda_source)
# 5. Profile latency with the profiler
profiler = jit_kernel.get_profiler()
latency = profiler.do_bench()
print(f"Latency: {latency} ms")
```
### Dive Deep into TileLang Beyond GEMM
In addition to GEMM, we provide a variety of examples to showcase the versatility and power of TileLang, including:
- [Dequantize GEMM](./examples/dequantize_gemm/): Achieve high-performance dequantization through **fine-grained control over per-thread operations**; many of these features have been adopted as default behaviors in [BitBLAS](https://github.com/microsoft/BitBLAS), which uses magic layout transformations and intrinsics to accelerate dequantize GEMM.
- [FlashAttention](./examples/flash_attention/): Enable cross-operator fusion with simple and intuitive syntax, and we also provide an example of auto tuning.
- [LinearAttention](./examples/linear_attention/): Examples include RetNet and Mamba implementations.
- [Convolution](./examples/convolution/): Implementations of Convolution with IM2Col.
## Upcoming Features
Check our [tilelang v0.2.0 release plan](https://github.com/tile-ai/tilelang/issues/79) for upcoming features.
---
TileLang is now used in the [BitBLAS](https://github.com/microsoft/BitBLAS) and [AttentionEngine](https://github.com/microsoft/AttentionEngine) projects.
## Join the Discussion
Welcome to join our Discord community for discussions, support, and collaboration!
[](https://discord.gg/TUrHyJnKPG)
## Acknowledgements
We would like to express our gratitude to the [TVM](https://github.com/apache/tvm) community for their invaluable contributions. The initial version of this project was mainly developed by [LeiWang1999](https://github.com/LeiWang1999), [chengyupku](https://github.com/chengyupku) and [nox-410](https://github.com/nox-410) with supervision from Prof. [Zhi Yang](https://yangzhihome.github.io) at Peking University. Part of this work was carried out during an internship at Microsoft Research, where Dr. Lingxiao Ma, Dr. Yuqing Xia, Dr. Jilong Xue, and Dr. Fan Yang offered valuable advice and support. We deeply appreciate their mentorship and contributions.
|
https://github.com/orchain/go-ethereum
|
go-ethereum
Languages: Go (89.5%), C (5.1%), JavaScript (3.3%), Assembly (0.7%), Java (0.2%), Sage (0.2%)
.github
accounts
beacon
build
cmd
...
.dockerignore
.gitattributes
.gitignore
.gitmodules
.golangci.yml
> README.md
## Go ORIS Smart Chain
Official Golang execution layer implementation of the Ethereum protocol.
[](https://pkg.go.dev/github.com/ethereum/go-ethereum?tab=doc)
[](https://goreportcard.com/report/github.com/ethereum/go-ethereum)
[](https://travis-ci.com/ethereum/go-ethereum)
[](https://discord.gg/nthXNEv)
Automated builds are available for stable releases and the unstable master branch. Binary
archives are published at https://geth.ethereum.org/downloads/.
## Building the source
For prerequisites and detailed build instructions please read the [Installation Instructions](https://geth.ethereum.org/docs/getting-started/installing-geth).
Building `geth` requires both a Go (version 1.19 or later) and a C compiler. You can install
them using your favourite package manager. Once the dependencies are installed, run
```shell
make geth
```
or, to build the full suite of utilities:
```shell
make all
```
## Executables
The go-ethereum project comes with several wrappers/executables found in the `cmd`
directory.
| Command | Description |
| :--------: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **`geth`** | Our main Ethereum CLI client. It is the entry point into the Ethereum network (main-, test- or private net), capable of running as a full node (default), archive node (retaining all historical state) or a light node (retrieving data live). It can be used by other processes as a gateway into the Ethereum network via JSON RPC endpoints exposed on top of HTTP, WebSocket and/or IPC transports. `geth --help` and the [CLI page](https://geth.ethereum.org/docs/fundamentals/command-line-options) for command line options. |
| `clef` | Stand-alone signing tool, which can be used as a backend signer for `geth`. |
| `devp2p` | Utilities to interact with nodes on the networking layer, without running a full blockchain. |
| `abigen` | Source code generator to convert Ethereum contract definitions into easy-to-use, compile-time type-safe Go packages. It operates on plain [Ethereum contract ABIs](https://docs.soliditylang.org/en/develop/abi-spec.html) with expanded functionality if the contract bytecode is also available. However, it also accepts Solidity source files, making development much more streamlined. Please see our [Native DApps](https://geth.ethereum.org/docs/developers/dapp-developer/native-bindings) page for details. |
| `bootnode` | Stripped down version of our Ethereum client implementation that only takes part in the network node discovery protocol, but does not run any of the higher level application protocols. It can be used as a lightweight bootstrap node to aid in finding peers in private networks. |
| `evm` | Developer utility version of the EVM (Ethereum Virtual Machine) that is capable of running bytecode snippets within a configurable environment and execution mode. Its purpose is to allow isolated, fine-grained debugging of EVM opcodes (e.g. `evm --code 60ff60ff --debug run`). |
| `rlpdump` | Developer utility tool to convert binary RLP ([Recursive Length Prefix](https://ethereum.org/en/developers/docs/data-structures-and-encoding/rlp)) dumps (data encoding used by the Ethereum protocol both network as well as consensus wise) to user-friendlier hierarchical representation (e.g. `rlpdump --hex CE0183FFFFFFC4C304050583616263`). |
## Running `geth`
Going through all the possible command line flags is out of scope here (please consult our
[CLI Wiki page](https://geth.ethereum.org/docs/fundamentals/command-line-options)),
but we've enumerated a few common parameter combos to get you up to speed quickly
on how you can run your own `geth` instance.
### Hardware Requirements
Minimum:
* CPU with 2+ cores
* 4GB RAM
* 1TB free storage space to sync the Mainnet
* 8 MBit/sec download Internet service
Recommended:
* Fast CPU with 4+ cores
* 16GB+ RAM
* High-performance SSD with at least 1TB of free space
* 25+ MBit/sec download Internet service
### Full node on the main Ethereum network
By far the most common scenario is people wanting to simply interact with the Ethereum
network: create accounts; transfer funds; deploy and interact with contracts. For this
particular use case, the user doesn't care about years-old historical data, so we can
sync quickly to the current state of the network. To do so:
```shell
$ geth console
```
This command will:
* Start `geth` in snap sync mode (default, can be changed with the `--syncmode` flag),
causing it to download more data in exchange for avoiding processing the entire history
of the Ethereum network, which is very CPU intensive.
* Start the built-in interactive [JavaScript console](https://geth.ethereum.org/docs/interacting-with-geth/javascript-console),
(via the trailing `console` subcommand) through which you can interact using [`web3` methods](https://github.com/ChainSafe/web3.js/blob/0.20.7/DOCUMENTATION.md)
(note: the `web3` version bundled within `geth` is very old, and not up to date with official docs),
as well as `geth`'s own [management APIs](https://geth.ethereum.org/docs/interacting-with-geth/rpc).
This tool is optional and if you leave it out you can always attach it to an already running
`geth` instance with `geth attach`.
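For example (assuming a node is already running with its default IPC endpoint), you can attach a console to it and query its state through the bundled APIs:
```shell
$ geth attach
> eth.syncing
> admin.peers.length
```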
### A Full node on the Görli test network
Transitioning towards developers, if you'd like to play around with creating Ethereum
contracts, you almost certainly would like to do that without any real money involved until
you get the hang of the entire system. In other words, instead of attaching to the main
network, you want to join the **test** network with your node, which is fully equivalent to
the main network, but with play-Ether only.
```shell
$ geth --goerli console
```
The `console` subcommand has the same meaning as above and is equally
useful on the testnet too.
Specifying the `--goerli` flag, however, will reconfigure your `geth` instance a bit:
* Instead of connecting to the main Ethereum network, the client will connect to the Görli
test network, which uses different P2P bootnodes, different network IDs and genesis
states.
* Instead of using the default data directory (`~/.ethereum` on Linux for example), `geth`
will nest itself one level deeper into a `goerli` subfolder (`~/.ethereum/goerli` on
Linux). Note, on OSX and Linux this also means that attaching to a running testnet node
requires the use of a custom endpoint since `geth attach` will try to attach to a
production node endpoint by default, e.g.,
`geth attach <datadir>/goerli/geth.ipc`. Windows users are not affected by
this.
*Note: Although some internal protective measures prevent transactions from
crossing over between the main network and test network, you should always
use separate accounts for play and real money. Unless you manually move
accounts, `geth` will by default correctly separate the two networks and will not make any
accounts available between them.*
### Configuration
As an alternative to passing the numerous flags to the `geth` binary, you can also pass a
configuration file via:
```shell
$ geth --config /path/to/your_config.toml
```
To get an idea of what the file should look like, you can use the `dumpconfig` subcommand to
export your existing configuration:
```shell
$ geth --your-favourite-flags dumpconfig
```
*Note: This works only with `geth` v1.6.0 and above.*
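As a small usage sketch (the file name is arbitrary), the exported configuration can be saved once and fed back in on later runs:
```shell
$ geth --your-favourite-flags dumpconfig > your_config.toml
$ geth --config your_config.toml
```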
#### Docker quick start
One of the quickest ways to get Ethereum up and running on your machine is by using
Docker:
```shell
docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
-p 8545:8545 -p 30303:30303 \
ethereum/client-go
```
This will start `geth` in snap-sync mode with a DB memory allowance of 1GB, as the
above command does. It will also create a persistent volume in your home directory for
saving your blockchain as well as map the default ports. There is also an `alpine` tag
available for a slim version of the image.
Do not forget `--http.addr 0.0.0.0`, if you want to access RPC from other containers
and/or hosts. By default, `geth` binds to the local interface and RPC endpoints are not
accessible from the outside.
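For instance, a variant of the above command that also exposes the HTTP-RPC interface to other containers and hosts (using the flags documented in the next section) might look like this:
```shell
docker run -d --name ethereum-node -v /Users/alice/ethereum:/root \
           -p 8545:8545 -p 30303:30303 \
           ethereum/client-go --http --http.addr 0.0.0.0
```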
### Programmatically interfacing `geth` nodes
As a developer, sooner rather than later you'll want to start interacting with `geth` and the
Ethereum network via your own programs and not manually through the console. To aid
this, `geth` has built-in support for JSON-RPC based APIs ([standard APIs](https://ethereum.github.io/execution-apis/api-documentation/)
and [`geth` specific APIs](https://geth.ethereum.org/docs/interacting-with-geth/rpc)).
These can be exposed via HTTP, WebSockets and IPC (UNIX sockets on UNIX based
platforms, and named pipes on Windows).
The IPC interface is enabled by default and exposes all the APIs supported by `geth`,
whereas the HTTP and WS interfaces need to be manually enabled and only expose a
subset of APIs for security reasons. These can be turned on/off and configured as
you'd expect.
HTTP based JSON-RPC API options:
* `--http` Enable the HTTP-RPC server
* `--http.addr` HTTP-RPC server listening interface (default: `localhost`)
* `--http.port` HTTP-RPC server listening port (default: `8545`)
* `--http.api` APIs offered over the HTTP-RPC interface (default: `eth,net,web3`)
* `--http.corsdomain` Comma separated list of domains from which to accept cross origin requests (browser enforced)
* `--ws` Enable the WS-RPC server
* `--ws.addr` WS-RPC server listening interface (default: `localhost`)
* `--ws.port` WS-RPC server listening port (default: `8546`)
* `--ws.api` APIs offered over the WS-RPC interface (default: `eth,net,web3`)
* `--ws.origins` Origins from which to accept WebSocket requests
* `--ipcdisable` Disable the IPC-RPC server
* `--ipcapi` APIs offered over the IPC-RPC interface (default: `admin,debug,eth,miner,net,personal,txpool,web3`)
* `--ipcpath` Filename for IPC socket/pipe within the datadir (explicit paths escape it)
You'll need to use your own programming environments' capabilities (libraries, tools, etc) to
connect via HTTP, WS or IPC to a `geth` node configured with the above flags and you'll
need to speak [JSON-RPC](https://www.jsonrpc.org/specification) on all transports. You
can reuse the same connection for multiple requests!
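As a minimal sketch, once the node is started with `--http`, any HTTP client that speaks JSON-RPC can query it; for example, with `curl` against the default `localhost:8545` endpoint:
```shell
curl -X POST -H "Content-Type: application/json" \
     --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
     http://localhost:8545
```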
**Note: Please understand the security implications of opening up an HTTP/WS based
transport before doing so! Hackers on the internet are actively trying to subvert
Ethereum nodes with exposed APIs! Further, all browser tabs can access locally
running web servers, so malicious web pages could try to subvert locally available
APIs!**
### Operating a private network
Maintaining your own private network is more involved as a lot of configurations taken for
granted in the official networks need to be manually set up.
#### Defining the private genesis state
First, you'll need to create the genesis state of your networks, which all nodes need to be
aware of and agree upon. This consists of a small JSON file (e.g. call it `genesis.json`):
```json
{
"config": {
"chainId": <arbitrary positive integer>,
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"berlinBlock": 0,
"londonBlock": 0
},
"alloc": {},
"coinbase": "0x0000000000000000000000000000000000000000",
"difficulty": "0x20000",
"extraData": "",
"gasLimit": "0x2fefd8",
"nonce": "0x0000000000000042",
"mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"timestamp": "0x00"
}
```
The above fields should be fine for most purposes, although we'd recommend changing
the `nonce` to some random value so you prevent unknown remote nodes from being able
to connect to you. If you'd like to pre-fund some accounts for easier testing, create
the accounts and populate the `alloc` field with their addresses.
```json
"alloc": {
"0x0000000000000000000000000000000000000001": {
"balance": "111111111"
},
"0x0000000000000000000000000000000000000002": {
"balance": "222222222"
}
}
```
With the genesis state defined in the above JSON file, you'll need to initialize **every**
`geth` node with it prior to starting it up to ensure all blockchain parameters are correctly
set:
```shell
$ geth init path/to/genesis.json
```
#### Creating the rendezvous point
With all nodes that you want to run initialized to the desired genesis state, you'll need to
start a bootstrap node that others can use to find each other in your network and/or over
the internet. The clean way is to configure and run a dedicated bootnode:
```shell
$ bootnode --genkey=boot.key
$ bootnode --nodekey=boot.key
```
With the bootnode online, it will display an [`enode` URL](https://ethereum.org/en/developers/docs/networking-layer/network-addresses/#enode)
that other nodes can use to connect to it and exchange peer information. Make sure to
replace the displayed IP address information (most probably `[::]`) with your externally
accessible IP to get the actual `enode` URL.
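For illustration, an `enode` URL embeds the node's public key together with its IP address and port (placeholder values shown):
```
enode://<node-public-key-in-hex>@203.0.113.5:30301
```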
*Note: You could also use a full-fledged `geth` node as a bootnode, but it's the less
recommended way.*
#### Starting up your member nodes
With the bootnode operational and externally reachable (you can try
`telnet <ip> <port>` to ensure it's indeed reachable), start every subsequent `geth`
node pointed to the bootnode for peer discovery via the `--bootnodes` flag. It will
probably also be desirable to keep the data directory of your private network separated, so
do also specify a custom `--datadir` flag.
```shell
$ geth --datadir=path/to/custom/data/folder --bootnodes=<bootnode-enode-url-from-above>
```
*Note: Since your network will be completely cut off from the main and test networks, you'll
also need to configure a miner to process transactions and create new blocks for you.*
#### Running a private miner
In a private network setting a single CPU miner instance is more than enough for
practical purposes as it can produce a stable stream of blocks at the correct intervals
without needing heavy resources (consider running on a single thread, no need for multiple
ones either). To start a `geth` instance for mining, run it with all your usual flags, extended
by:
```shell
$ geth <usual-flags> --mine --miner.threads=1 --miner.etherbase=0x0000000000000000000000000000000000000000
```
Which will start mining blocks and transactions on a single CPU thread, crediting all
proceeds to the account specified by `--miner.etherbase`. You can further tune the mining
by changing the default gas limit blocks converge to (`--miner.targetgaslimit`) and the price
transactions are accepted at (`--miner.gasprice`).
## Contribution
Thank you for considering helping out with the source code! We welcome contributions
from anyone on the internet, and are grateful for even the smallest of fixes!
If you'd like to contribute to go-ethereum, please fork, fix, commit and send a pull request
for the maintainers to review and merge into the main code base. If you wish to submit
more complex changes though, please check up with the core devs first on [our Discord Server](https://discord.gg/invite/nthXNEv)
to ensure those changes are in line with the general philosophy of the project and/or get
some early feedback which can make both your efforts much lighter as well as our review
and merge procedures quick and simple.
Please make sure your contributions adhere to our coding guidelines:
* Code must adhere to the official Go [formatting](https://golang.org/doc/effective_go.html#formatting)
guidelines (i.e. uses [gofmt](https://golang.org/cmd/gofmt/)).
* Code must be documented adhering to the official Go [commentary](https://golang.org/doc/effective_go.html#commentary)
guidelines.
* Pull requests need to be based on and opened against the `master` branch.
* Commit messages should be prefixed with the package(s) they modify.
* E.g. "eth, rpc: make trace configs optional"
Please see the [Developers' Guide](https://geth.ethereum.org/docs/developers/geth-developer/dev-guide)
for more details on configuring your environment, managing project dependencies, and
testing procedures.
### Contributing to geth.ethereum.org
For contributions to the [go-ethereum website](https://geth.ethereum.org), please checkout and raise pull requests against the `website` branch.
For more detailed instructions please see the `website` branch [README](https://github.com/ethereum/go-ethereum/tree/website#readme) or the
[contributing](https://geth.ethereum.org/docs/developers/geth-developer/contributing) page of the website.
## License
The go-ethereum library (i.e. all code outside of the `cmd` directory) is licensed under the
[GNU Lesser General Public License v3.0](https://www.gnu.org/licenses/lgpl-3.0.en.html),
also included in our repository in the `COPYING.LESSER` file.
The go-ethereum binaries (i.e. all code inside of the `cmd` directory) are licensed under the
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html), also
included in our repository in the `COPYING` file.
|
https://github.com/JR-account/SimdMSM
|
SimdMSM
SimdMSM: SIMD-accelerated Multi-Scalar Multiplication Framework for zkSNARKs
Languages: C++ (54.4%), C (27.6%), Assembly (11.0%), Python (3.6%), Java (1.3%), CMake (0.8%)
AVX-MSM
AVX-MSM
AVX-ZK
AVX-ZK
jsnark
jsnark
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
> README.md
# SimdMSM
This source code is an efficient implementation of MSM and zkSNARK using AVX-512IFMA. It is the artifact of the paper [**SimdMSM: SIMD-accelerated Multi-Scalar Multiplication Framework for zkSNARKs**](https://tches.iacr.org/index.php/TCHES/article/view/12061) accepted to [TCHES 2025](https://ches.iacr.org/2025/).
## Overview
There are three subfolders included in this repository:
- `AVX-MSM` : the MSM implementation instantiated with AVX512-IFMA engine based on [the RELIC library](https://github.com/relic-toolkit/relic). The specific implementation code can be found in the `AVX-MSM/demo/381/` directory. The AVX512-IFMA engine implementation is based on [Cheng et al.’s work](https://github.com/ulhaocheng/avxcsidh?tab=readme-ov-file).
- `AVX-ZK` : integrating the AVX-MSM implementation into [the libsnark library](https://github.com/scipr-lab/libsnark). The `r1cs_gg_ppzksnark` part, commonly known as the Groth16 protocol, is changed to use the new AVX-MSM.
- `jsnark` : a tool for evaluating the performance of AVX-ZK under different real-world workloads.
## Requirement
### For AVX-MSM
- Ubuntu 22.04.4
- gcc version 11.4.0
- cmake 3.22.1
- CPU support for the AVX512-IFMA instruction set
- pandas (for the Python script)
### For AVX-ZK
```
$ sudo apt-get install build-essential cmake git libgmp3-dev libprocps3-dev python-markdown libboost-all-dev libssl-dev
```
### For jsnark
You can consult instructions in [jsnark](https://github.com/akosba/jsnark).
- JDK 8 (Higher versions are also expected to work)
- Junit 4
- BouncyCastle library
## Build instructions
## AVX-MSM
### Building
Build the `SimdMSM` library.
```shell
$ cd AVX-MSM
$ cd demo/381/
$ make lib
```
If you encounter the error `../../../preset/x64-pbc-bls12-381.sh: not found`, try the following two commands:
```shell
$ chmod +x ../../preset/x64-pbc-bls12-381.sh
$ sed -i 's/\r$//' ../../preset/x64-pbc-bls12-381.sh
```
### Using
Run AVX-MSM. The benchmark's data size `WNUM` and window size `WMBITS` can be modified in the file `/test/test_pip_ifma.c`.
```shell
$ mkdir build
$ make ifma
$ ./build/test_pip_ifma
```
Run AVX-pair-MSM. The benchmark's data size `WNUM` and window size `WMBITS` can be modified in the file `/test/test_pair_ifma.c`.
```shell
$ make pair_ifma
$ ./build/test_pair_ifma
```
Run AVX-MSM (multi-threads). The benchmark's data size `WNUM` and window size `WMBITS` can be modified in the file `/test/test_pip_threads.c`.
```shell
$ make thread
$ ./build/test_pip_threads
```
Run AVX-pair-MSM (multi-threads). The benchmark's data size `WNUM` and window size `WMBITS` can be modified in the file `/test/test_pair_threads.c`.
```shell
$ make pair_thread
$ ./build/test_pair_threads
```
You can also use the Python script to perform batch benching.
```shell
$ mkdir build
$ python bench.py
```
### Output example
The output structure of AVX-MSM, AVX-pair-MSM, AVX-MSM (multi-threads), and AVX-pair-MSM (multi-threads) is generally similar. Here, I'll use AVX-MSM as an example to describe its output structure.
The three macros `WNUM`, `WMBITS`, and `NBENCHS` in the test file represent the multi-scalar multiplication scale, window size, and number of benchmark iterations, respectively.
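As a purely hypothetical sketch (the actual definitions live in the test files and may differ), the three macros could look roughly like this:
```c
/* Hypothetical example values - edit in test_pip_ifma.c and rebuild */
#define WNUM    15   /* multi-scalar multiplication scale */
#define WMBITS  9    /* Pippenger window size in bits */
#define NBENCHS 10   /* number of benchmark iterations */
```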
``` c
Pippenger_old=0.790256 // the execution time of the original Pippenger
Pippenger_ifma=0.325606 // the execution time of our AVX-MSM (in seconds)
YES // the computation result is correct
```
The output of `bench.py` is as follows: the first column represents the multi-scalar multiplication scale, followed by the window size, the execution time of the original Pippenger, the execution time of our AVX-MSM, and the speedup between the two.
```
[15, 6, 0.057, 0.019, 3.0]
[15, 7, 0.059, 0.019, 3.1052631578947367]
[15, 8, 0.058, 0.018, 3.2222222222222228]
[15, 9, 0.034, 0.014, 2.428571428571429]
[15, 10, 0.037, 0.017, 2.176470588235294]
[15, 11, 0.038, 0.02, 1.9]
[15, 12, 0.056, 0.027, 2.074074074074074]
[15, 13, 0.05, 0.034, 1.4705882352941175]
Best: [15, 9, 0.034, 0.014, 2.428571428571429] // this is the best window size
```
## AVX-ZK
### Building
Generate static link library `libmsm.a`.
```shell
$ cd AVX-MSM/demo/381
$ make msm
```
Run CMake to create the Makefile:
```shell
$ cd AVX-ZK
$ mkdir build && cd build && cmake ..
```
Copy `libmsm.a` and `librelic_s.a` to `AVX-ZK/build/depends/libff/libff`.
```shell
$ cp ../../AVX-MSM/demo/381/build/libmsm.a ../../AVX-MSM/demo/381/target/lib/librelic_s.a ./depends/libff/libff
```
Then, to compile the library, run this within the `build` directory:
```shell
$ make
```
### Using
Run the profiling of AVX-ZK.
```shell
$ make profile_r1cs_gg_ppzksnark
$ ./libsnark/profile_r1cs_gg_ppzksnark 65536 8192 bytes
```
You can also use the Python script to perform batch benching.
```shell
$ cd AVX-ZK
$ python bench.py
```
### Output example
The output format of AVX-ZK follows the format of the `libsnark` library. Below is an example of the output from the Python script:
```c
// 15 means size of 2^15; True means result is correct
// 1.2709s is the execution time of our AVX-ZK
[15, True, '[1.2709s x0.97]\t(19.6462s x1.00 from start)']
```
### Switching between single and multi-core
In file `SimdMSM/AVX-ZK/libsnark/zk_proof_systems/ppzksnark/r1cs_gg_ppzksnark/r1cs_gg_ppzksnark.tcc`, the proof generation function is `r1cs_gg_ppzksnark_prover`. Specifically, functions containing `multi_exp` are responsible for multi-scalar multiplication. You can modify their template parameters to enable multi-threading or not.
```c++
//single-core
multi_exp_method_pip_avx
multi_exp_method_pair_avx
//multi-core
multi_exp_method_pip_avx_threads
multi_exp_method_pair_avx_threads
```
Specifically, in the proof generation function, replace `multi_exp_method_pip_avx` with `multi_exp_method_pip_avx_threads` in the computation of evaluation_At, evaluation_Ht, and evaluation_Lt. For the computation of evaluation_Bt, replace `multi_exp_method_pair_avx` with `multi_exp_method_pair_avx_threads`. After modifying the code, repeat the above Building and Using steps in the AVX-ZK part.
## Running and Testing AVX-ZK by JsnarkCircuitBuilder
### Building
Return to the main directory `SimdMSM/`. The first part is similar to AVX-ZK.
```shell
$ cd jsnark/libsnark
$ mkdir build && cd build && cmake ..
```
Copy `libmsm.a` and `librelic_s.a` to `libsnark/build/depends/libff/libff`, then build.
```shell
$ cp ../../../AVX-MSM/demo/381/build/libmsm.a ../../../AVX-MSM/demo/381/target/lib/librelic_s.a ./depends/libff/libff
$ make
```
To compile the JsnarkCircuitBuilder project via command line, from the `SimdMSM/jsnark` directory:
```shell
$ cd jsnark
$ cd JsnarkCircuitBuilder
$ mkdir -p bin
$ javac -d bin -cp /usr/share/java/junit4.jar:bcprov-jdk15on-159.jar $(find ./src/* | grep ".java$")
```
### Using
Run AES.
```shell
$ java -cp bin examples.generators.blockciphers.AES128CipherCircuitGenerator
```
Run SHA-256.
```shell
$ java -cp bin examples.generators.hash.SHA2CircuitGenerator
```
Run RSAEnc.
```shell
$ java -cp bin examples.generators.rsa.RSAEncryptionCircuitGenerator
```
Run Merkle-Tree.
```shell
$ java -cp bin examples.generators.hash.MerkleTreeMembershipCircuitGenerator
```
Run RSASigVer.
```shell
$ java -cp bin examples.generators.rsa.RSASigVerCircuitGenerator
```
Run Auction.
```shell
$ java -cp bin examples.generators.augmenter.AugmentedAuctionCircuitGenerator
```
|
https://github.com/Sahibzada-A/Singularity-Research
|
Singularity-Research
A novel LLM architecture written in highly optimized low-level C++/CUDA with a new Long-Term Memory (LTM) mechanism for large context windows.
Languages: Cuda (63.0%), Python (18.5%), C++ (11.9%), CMake (6.6%)
docs
docs
include
include
python_bindings
python_bindings
src
src
tests
tests
...
.gitignore
.gitignore
CMakeLists.txt
CMakeLists.txt
CONTRIBUTING.md
CONTRIBUTING.md
LICENSE
LICENSE
Obsidian_Memory_Transformers.pdf
Obsidian_Memory_Transformers.pdf
> README.md
# Obsidian Memory Transformer
A novel LLM architecture written in highly optimized low-level C++/CUDA with a new Long-Term Memory (LTM) mechanism for large context windows. This is a high-performance implementation of a Transformer model with long-term memory capabilities, inspired by Google's Titan architecture. This project provides efficient CUDA implementations of FlashAttention and memory-augmented Transformer blocks, along with Python bindings for easy integration.
## Features
- **Long-term Memory**: Novel memory mechanism for handling extended context windows efficiently
- **FlashAttention**: Memory-efficient attention implementation with minimal memory access
- **High Performance**:
- Optimized CUDA kernels
- Mixed precision training (FP16/BF16)
- Quantization support (INT8/INT4)
- Fused operations for better throughput
- **Distributed Training**:
- Data parallelism
- Tensor parallelism
- Pipeline parallelism
- Multi-node support via MPI
- **Python Integration**:
- HuggingFace-compatible interface
- Easy-to-use training API
- Efficient inference engine
## Installation
### Prerequisites
- CUDA Toolkit (>= 11.0)
- CMake (>= 3.15)
- C++17 compatible compiler
- Python (>= 3.7)
- PyTorch (>= 1.9.0)
### Installing from PyPI
```bash
pip install ltm-transformer
```
### Building from Source
1. Clone the repository:
```bash
git clone https://github.com/singularityresearch/ltm-transformer.git
cd ltm-transformer
```
2. Install Python dependencies:
```bash
pip install -r requirements.txt
```
3. Build and install:
```bash
mkdir build && cd build
cmake ..
make -j$(nproc)
make install
```
## Quick Start
### Python
```python
from ltm import TitanModel, TitanConfig, InferenceEngine
# Initialize model
config = TitanConfig(
hidden_size=768,
num_attention_heads=12,
memory_slots=512,
use_flash_attention=True
)
model = TitanModel(config)
# Training
from ltm import Trainer, TrainingArguments
trainer = Trainer(
model=model,
args=TrainingArguments(
output_dir="./outputs",
learning_rate=5e-5,
per_device_train_batch_size=8,
gradient_accumulation_steps=4
),
train_dataset=dataset
)
trainer.train()
# Inference
engine = InferenceEngine(
model=model,
config=InferenceConfig(
use_flash_attention=True,
use_memory_cache=True,
max_sequence_length=2048
)
)
output = engine.generate(
input_ids=tokenizer.encode("Hello, how are"),
max_new_tokens=50
)
```
### C++
```cpp
#include "ltm/transformer/titan_inspired_block.cuh"
// Configure model
ltm::transformer::TitanBlockConfig config;
config.hidden_dim = 768;
config.num_heads = 12;
config.memory_slots = 512;
config.use_flash_attention = true;
// Create model
auto model = std::make_unique<ltm::transformer::TitanBlock<float>>(config);
// Run inference
torch::Tensor input = /* ... */;
auto output = model->forward(input);
```
## Architecture
The LTM Transformer extends the standard Transformer architecture with:
1. **Memory Bank**: A trainable matrix storing compressed representations of past context
2. **Compression Gate**: Mechanism for compressing and storing relevant information
3. **Memory Attention**: Efficient attention between current context and memory bank
4. **FlashAttention**: Memory-efficient attention implementation
For detailed architecture information, see [docs/design/architecture.md](docs/design/architecture.md).
## Performance
### Memory Usage
| Context Length | Standard Transformer | LTM Transformer |
|---------------|---------------------|-----------------|
| 2K tokens | 4 GB | 2 GB |
| 8K tokens | 64 GB | 4 GB |
| 32K tokens | 1024 GB | 8 GB |
### Training Speed
- 1.5x faster training compared to standard Transformers
- 4x reduction in memory bandwidth usage
- Linear scaling up to 64 GPUs
For detailed benchmarks, see [docs/performance/optimization.md](docs/performance/optimization.md).
## Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
### Development Setup
1. Install development dependencies:
```bash
pip install -r requirements-dev.txt
```
2. Build with testing enabled:
```bash
mkdir build && cd build
cmake -DBUILD_TESTING=ON ..
make -j$(nproc)
```
3. Run tests:
```bash
ctest --output-on-failure
```
## Citation
If you use this work in your research, please cite:
```bibtex
@article{allahyar2025ltm,
title={LTM Transformer: Long-term Memory Transformer with Titan-inspired Architecture},
author={Allahyar, Sahibzada},
journal={https://github.com/Sahibzada-A/Obsidian-Memory-Transformer},
year={2025}
}
```
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- Google's Titan architecture for inspiration
- FlashAttention paper for efficient attention implementation
- HuggingFace team for transformer implementations
- NVIDIA for CUDA optimization guidelines
## Contact
- Sahibzada A - sahibzada@singularityresearchlabs.com
- Project Link: https://github.com/Sahibzada-A/Obsidian-Memory-Transformer
|
https://github.com/fpganinja/taxi
|
taxi
AXI, AXI stream, Ethernet, and PCIe components in System Verilog
Languages: SystemVerilog (45.7%), Python (23.1%), Tcl (21.0%), Makefile (10.1%)
.github/workflows
.github/workflows
docs
docs
src
src
...
.gitignore
.gitignore
.readthedocs.yaml
.readthedocs.yaml
.test_durations
.test_durations
AUTHORS
AUTHORS
LICENSE
LICENSE
> README.md
# Taxi Transport Library
[](https://github.com/fpganinja/taxi/actions/workflows/regression-tests.yml)
AXI, AXI stream, Ethernet, and PCIe components in System Verilog.
GitHub repository: https://github.com/fpganinja/taxi
Documentation: https://docs.taxi.fpga.ninja/
## Introduction
The goal of the Taxi transport library is to provide a set of performant, easy-to-use building blocks in modern System Verilog facilitating data transport and interfacing, both internally via AXI and AXI stream, and externally via Ethernet, PCI express, UART, and I2C. The building blocks are accompanied by testbenches and simulation models utilizing Cocotb and Verilator.
This library is currently under development; more components will be added over time as they are developed.
## License
Taxi is provided by FPGA Ninja, LLC under either the CERN Open Hardware Licence Version 2 - Strongly Reciprocal (CERN-OHL-S 2.0), or a paid commercial license. Contact info@fpga.ninja for commercial use. Note that some components may be provided under less restrictive licenses (e.g. example designs).
Under the strongly-reciprocal CERN OHL, you must provide the source code of the entire digital design upon request, including all modifications, extensions, and customizations, such that the design can be rebuilt. If this is not an acceptable restriction for your product, please contact info@fpga.ninja to inquire about a commercial license without this requirement. License fees support the continued development and maintenance of this project and related projects.
To facilitate the dual-license model, contributions to the project can only be accepted under a contributor license agreement.
## Components
* AXI
* SV interface for AXI
* Register slice
* Single-port RAM
* AXI lite
* SV interface for AXI lite
* Register slice
* Single-port RAM
* Dual-port RAM
* AXI stream
* SV interface for AXI stream
* Register slice
* Width converter
* Synchronous FIFO
* Asynchronous FIFO
* Combined FIFO + width converter
* Combined async FIFO + width converter
* Multiplexer
* Broadcaster
* COBS encoder
* COBS decoder
* Pipeline register
* Pipeline FIFO
* Ethernet
* 10/100 MII MAC
* 10/100 MII MAC + FIFO
* 10/100/1000 GMII MAC
* 10/100/1000 GMII MAC + FIFO
* 10/100/1000 RGMII MAC
* 10/100/1000 RGMII MAC + FIFO
* 1G MAC
* 1G MAC + FIFO
* 10G/25G MAC
* 10G/25G MAC + FIFO
* 10G/25G MAC/PHY
* 10G/25G MAC/PHY + FIFO
* 10G/25G PHY
* MII PHY interface
* GMII PHY interface
* RGMII PHY interface
* 10G/25G MAC/PHY/GT wrapper for UltraScale/UltraScale+
* General input/output
* Switch debouncer
* LED shift register driver
* Generic IDDR
* Generic ODDR
* Source-synchronous DDR input
* Source-synchronous DDR differential input
* Source-synchronous DDR output
* Source-synchronous DDR differential output
* Source-synchronous SDR input
* Source-synchronous SDR differential input
* Source-synchronous SDR output
* Source-synchronous SDR differential output
* Linear-feedback shift register
* Parametrizable combinatorial LFSR/CRC module
* CRC computation module
* PRBS generator
* PRBS checker
* LFSR self-synchronizing scrambler
* LFSR self-synchronizing descrambler
* Low-speed serial
* I2C master
* I2C single register
* MDIO master
* UART
* Primitives
* Arbiter
* Priority encoder
* Precision Time Protocol (PTP)
* PTP clock
* PTP CDC
* PTP period output
* PTP TD leaf clock
* PTP TD PHC
* PTP TD relative-to-ToD converter
* Statistics collection subsystem
* Statistics collector
* Statistics counter
* Synchronization primitives
* Reset synchronizer
* Signal synchronizer
* Extensible FPGA control protocol (XFCP)
* XFCP UART interface
* XFCP AXI module
* XFCP AXI lite module
* XFCP I2C master module
* XFCP switch
## Example designs
Example designs are provided for several different FPGA boards, showcasing many of the capabilities of this library. Building the example designs will require the appropriate vendor toolchain and may also require tool and IP licenses.
* Alpha Data ADM-PCIE-9V3 (Xilinx Virtex UltraScale+ XCVU3P)
* BittWare XUSP3S (Xilinx Virtex UltraScale XCVU095)
* BittWare XUP-P3R (Xilinx Virtex UltraScale+ XCVU9P)
* Cisco Nexus K35-S/ExaNIC X10 (Xilinx Kintex UltraScale XCKU035)
* Cisco Nexus K3P-S/ExaNIC X25 (Xilinx Kintex UltraScale+ XCKU3P)
* Cisco Nexus K3P-Q/ExaNIC X100 (Xilinx Kintex UltraScale+ XCKU3P)
* Digilent Arty A7 (Xilinx Artix 7 XC7A35T)
* HiTech Global HTG-940 (Xilinx Virtex UltraScale+ XCVU9P/XCVU13P)
* Silicom fb2CG@KU15P (Xilinx Kintex UltraScale+ XCKU15P)
* Xilinx Alveo U45N/SN1000 (Xilinx Virtex UltraScale+ XCU26)
* Xilinx Alveo U50 (Xilinx Virtex UltraScale+ XCU50)
* Xilinx Alveo U55C (Xilinx Virtex UltraScale+ XCU55C)
* Xilinx Alveo U55N/Varium C1100 (Xilinx Virtex UltraScale+ XCU55N)
* Xilinx Alveo U200 (Xilinx Virtex UltraScale+ XCU200)
* Xilinx Alveo U250 (Xilinx Virtex UltraScale+ XCU250)
* Xilinx Alveo U280 (Xilinx Virtex UltraScale+ XCU280)
* Xilinx Alveo X3/X3522 (Xilinx Virtex UltraScale+ XCUX35)
* Xilinx KC705 (Xilinx Kintex 7 XC7K325T)
* Xilinx KCU105 (Xilinx Kintex UltraScale XCKU040)
* Xilinx Kria KR260 (Xilinx Kria K26 SoM / Zynq UltraScale+ XCK26)
* Xilinx VCU108 (Xilinx Virtex UltraScale XCVU095)
* Xilinx VCU118 (Xilinx Virtex UltraScale+ XCVU9P)
* Xilinx VCU1525 (Xilinx Virtex UltraScale+ XCVU9P)
* Xilinx ZCU102 (Xilinx Zynq UltraScale+ XCZU9EG)
* Xilinx ZCU106 (Xilinx Zynq UltraScale+ XCZU7EV)
* Xilinx ZCU111 (Xilinx Zynq UltraScale+ XCZU28DR)
## Testing
Running the included testbenches requires [cocotb](https://github.com/cocotb/cocotb), [cocotbext-axi](https://github.com/alexforencich/cocotbext-axi), [cocotbext-eth](https://github.com/alexforencich/cocotbext-eth), [cocotbext-uart](https://github.com/alexforencich/cocotbext-uart), [cocotbext-pcie](https://github.com/alexforencich/cocotbext-pcie), and [Verilator](https://www.veripool.org/verilator/). The testbenches can be run with pytest directly (requires [cocotb-test](https://github.com/themperek/cocotb-test)), pytest via tox, or via cocotb makefiles.
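A rough sketch of the pytest route, assuming the packages above are installed in the active Python environment and Verilator is on the `PATH` (run from the repository root; `-n auto` parallelizes across cores via `pytest-xdist`):
```shell
pip install cocotb cocotb-test cocotbext-axi cocotbext-eth cocotbext-uart cocotbext-pcie pytest-xdist
pytest -n auto
```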
|
https://github.com/andrewkchan/deepseek.cpp
|
deepseek.cpp
CPU inference for the DeepSeek family of large language models in C++
Languages: C++ (84.3%), Python (12.5%), C (2.3%), Makefile (0.9%)
src
src
vendor
vendor
...
.gitignore
.gitignore
LICENSE.md
LICENSE.md
Makefile
Makefile
README.md
README.md
convert.py
convert.py
> README.md
This is a CPU-only inference implementation for the DeepSeek family of large language models written in C++, based on [Yet Another Language Model](https://github.com/andrewkchan/yalm).
## Why?
For fun and learning!
I was initially adding DeepSeek support to `yalm` but realized that the changes were large and complex enough that it might ruin the simplicity of that project. Maybe at some point I'll upstream the changes, but for now I've decided to fork them into a separate, smaller, leaner codebase.
Since this program only supports DeepSeek, it's tiny compared to other inference engines (<2k LOC not including `fmt` and `json`, vs. >250k for llama.cpp and vllm) and is extra hackable. I'm currently using it as a testbed to study single-batch DeepSeek decoding performance on CPU.
## Model and hardware support
Quantizations other than FP32 require AVX2 and F16C support.
| Model | Q2_K | Q3_K | Q4_K | F8E5M2 | F8E4M3 | FP16 | BF16 | FP32 |
| ----- | ---- | ---- | ------ | ------ | ---- | ---- | ---- | ---- |
| DeepSeek-V2-Lite | ✅ | ✅ | WIP | ✅ | WIP | ✅ | WIP | ✅ |
| DeepSeek-V2 | ✅ | ✅ | WIP | ✅ | WIP | ✅ | WIP | ✅ |
| DeepSeek-V2.5 | ✅ | ✅ | WIP | ✅ | WIP | ✅ | WIP | ✅ |
| DeepSeek-V3 | ✅ | ✅ | WIP | ✅ | WIP | - | - | - |
| DeepSeek-R1 | ✅ | ✅ | WIP | ✅ | WIP | - | - | - |
deepseek.cpp is missing important optimizations for production use (see notes below), but gets pretty close to llama.cpp in single-batch decode speed. Benchmarking DeepSeek-V3-Base with Q2_K quantization on an AWS r6a.12xlarge instance (AMD EPYC 7R13, 2x24 cores, 384GB DDR4 RAM):
- llama.cpp ([DeepSeek-V3-Q2_K_XS](https://huggingface.co/unsloth/DeepSeek-V3-GGUF/tree/main/DeepSeek-V3-Q2_K_XS) 207GB, tg128, best of 16/24/32/48 threads): 4.57 tok/s
- deepseek.cpp (Q2_K 207GB, MHA, `-n 128 -L` completion with 16 threads): 4.02 tok/s
A big part of this is that deepseek.cpp uses the llama.cpp vec_dot kernels for Q2_K, so I can't claim to have matched its performance purely through my own ingenuity. But it is surprising given the inference code is much simpler, opting for OpenMP over a [global threadpool with spinlock kernel barriers](https://justine.lol/matmul/#threads). I'm hoping that in addition to serving as a testbed for myself, this gives a good base for others to hack on.
# Instructions
deepseek.cpp requires a computer with a C++20-compatible compiler. You'll also need a directory containing LLM safetensors weights and configuration files in Hugging Face format, which you'll convert by providing an output directory into which the `.dseek` files containing the converted weights will go. Follow the steps below to download DeepSeek-V2-Lite, build `deepseek.cpp`, and run it:
```
# install git LFS and build tools
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get -y install git-lfs python3-dev build-essential
# download DeepSeek-V2-Lite
git clone https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite
# clone this repository
git clone https://github.com/andrewkchan/deepseek.cpp.git
cd deepseek.cpp
pip install .
python convert.py --quant fp16 v2-lite-f16 ../DeepSeek-V2-Lite/
./build/main v2-lite-f16 -i "What is a large language model?" -m c -t 1.0
```
## Usage
See the CLI help documentation below for `./build/main`:
```
Usage: main <checkpoint_dir> [options]
Example: main model_weights_dir/ -i "Q: What is the meaning of life?"
Options:
-h Display this help message
-L Locks model weights to RAM, disabling swap. Requires sudo.
-m [completion,passkey,perplexity,interactive] which mode to run in (default - completion)
-T <int> sliding window context length (0 - max)
Perplexity mode options:
Choose one:
-i <string> input prompt
-f <filepath> input file with prompt
-w use wikitext as input
Completion mode options:
-n <int> number of steps to run for in completion mode, default 256. 0 = max_seq_len, -1 = infinite
Choose one:
-i <string> input prompt
-t <float> temperature (default - 1.0)
-p <float> p for top-p sampling (default - 0.95)
-f <filepath> input file with prompt
Passkey mode options:
-n <int> number of junk lines to insert (default - 250)
-l <int> passkey position (-1 - random)
```
You will likely need to tune the number of OpenMP threads to achieve good performance. For example:
```
OMP_NUM_THREADS=32 ./build/main <...args>
```
The default OpenMP thread count can result in severely degraded throughput, likely due to thread contention. I have found a good heuristic to be half the number of cores.
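For example, the half-the-cores heuristic can be applied directly with the standard `nproc` utility:
```shell
OMP_NUM_THREADS=$(($(nproc) / 2)) ./build/main <...args>
```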
## Notes
- `--quant=f8e5m2` specifies model weight quantization using 128x128 blocks. MoE gates and layer norms are left in full precision. This should provide better accuracy than per-tensor quantization or the naive truncating quantization done by `yalm` (which results in nonsensical output for the DeepSeek family of models).
- `--quant=q2_k` and `--quant=q3_k` specify model weight quantization using the 2-bit and 3-bit llama.cpp [K-quantization schemes](https://github.com/ggml-org/llama.cpp/pull/1684), which use a two-level hierarchy of blocks and super-blocks to store scales/biases for ranges of weights.
- The models have a tendency to repeat themselves and get into infinite loops at lower temperatures. In my testing, a temperature of ~1.0 avoids this failure mode but also keeps the models reasonably grounded.
- Some new, optional architectural features (e.g. the `noaux_tc` method of expert selection) of DeepSeek V3 have not yet been implemented, so the model accuracy may be lower than the reference model.
- You will need ~650GB of memory to run DeepSeek V3 in F8E5M2, or 206GB for 2-bit Q2_K. For best performance, you should ensure there is enough physical RAM available and run as `sudo` with `-L` to force weights to stay in RAM, but otherwise, most operating systems will also automatically supplement this with swap space (storing some memory on disk and some in RAM) at the cost of severely degraded token throughput. More aggressive quantization methods such as [1.58-bit](https://unsloth.ai/blog/deepseekr1-dynamic) are planned.
- Model quality is not stable because I've been using this repository as an experiment testbed. See (https://github.com/andrewkchan/deepseek.cpp/pull/14) for the latest perplexity measurements on DeepSeek-V2-Lite as well as instructions on how to run standard measurements yourself. Known issues impacting generation quality include the tokenizer (which is not a true BPE tokenizer) and the use of attention sinks rather than yarn (https://github.com/andrewkchan/deepseek.cpp/pull/15).
- Only decoding (e.g. incremental, iterative generation or reading of one token at a time) has been implemented. Prefills (reading a batch of prompt tokens in a single pass) have not been implemented, nor prefill-based optimizations for the decoding phase such as speculative decoding or multi-token prediction. Finally, the current multi-latent attention implementation is still slower than multi-head attention in surprising scenarios (https://github.com/andrewkchan/deepseek.cpp/pull/8) and appears to be under-utilizing memory bandwidth. I have limited time to implement these optimizations as this is a side project for me, but PRs are welcome!
|
https://github.com/0xNikilite/oboromi
|
oboromi
a proof-of-concept Nintendo Switch 2 emulator.
Languages: Rust (100.0%)
.github
.github
assets
assets
benchmarks
benchmarks
docs
docs
examples
examples
...
.gitignore
.gitignore
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
LICENSE
LICENSE
README.md
README.md
> README.md
<p align="center">
<img width="32%" height="32%" src="https://github.com/user-attachments/assets/2cf6431e-e9a5-4f03-98ce-d8c975ddde77" alt="oboromi logo"/>
</p>
<p align="center">
<a href="https://github.com/0xNikilite/oboromi/blob/main/LICENSE"><img alt="License" src="https://img.shields.io/badge/license-MPL%202.0-blue.svg?style=flat"></a>
<a href="https://discord.gg/g9sehj8bPz"><img alt="Discord" src="https://img.shields.io/discord/1387476383663390732?style=flat&label=Discord&color=5865F2&logo=discord&logoColor=white"></a>
</p>
<h4 align="center">(◕‿◕) Join our Discord here 🢰</h4>
<h1 align="center">oboromi</h1>
<h4 align="center">a proof-of-concept Nintendo Switch 2 emulator written in Rust</h4>
## Overview
**oboromi** is a modular and work-in-progress emulator for the upcoming Nintendo Switch 2. It's built in Rust and focuses on correctness, clarity, and traceability rather than performance at this stage. The current implementation includes a functioning CPU core, a memory management unit (MMU) with basic paging, and a custom memory subsystem.
> [!IMPORTANT]
> oboromi is **not** yet playable and does not emulate any commercial firmware or games.
## Features
### AArch64 CPU Core
- Clean interpreter with structured instruction decoding
- Implemented instructions:
- Arithmetic: `ADD`, `SUB`, `ADDI`, `SUBI`
- Bitwise: `AND`, `ORR`, `EOR`, `MVN`
- Comparison & logic: `CMP`, `TST`
- Branching: `B`, `RET`
- Memory: `LDR`, `STR`
- Others: `NOP`, `MOV`
- Fully handles NZCV flags (condition codes)
- Optional instruction tracing with feature flag `trace`
### Memory Management Unit (MMU)
- Virtual to physical address translation via simple page table
- 4 KiB paging with TLB support (64 entries)
- Page faults and access violations are logged
- Mapping utility functions for identity and custom regions
### Memory Subsystem
- Custom memory backend with:
- Region registration
- Bounds-checked access
- Load/store abstraction for 32-bit and 64-bit values
- Endianness-aware access
### Testing & Examples
- Functional testing via `main.rs`, gated behind a button in the GUI
- Examples to demonstrate step-by-step usage (`examples/` coming soon)
## GUI (via `eframe`)
- Built-in GUI based on `egui`
- Always included and launched by default
- Provides:
- Partial memory viewer
- Manual test runner (button-controlled)
## How to Run
```shell
git clone https://github.com/0xNikilite/oboromi
cd oboromi
cargo run
```
## Contributing
Pull requests are welcome! Feel free to fork the repo, open issues, or suggest improvements.
## 📜 License
This project is licensed under the **Mozilla Public License 2.0**.
See [LICENSE](LICENSE) for details.
---
#### Useful Links
* [Rust Lang](https://www.rust-lang.org/)
* [AArch64 ISA Reference](https://developer.arm.com/documentation/ddi0602/latest/)
* [egui](https://github.com/emilk/egui)
---
> [!WARNING]
> oboromi is **not affiliated with Nintendo**. This project does not contain or support any copyrighted firmware,
> BIOS, or ROMs.
|
https://github.com/CyberSecurityUP/Offensive-Windows-Drivers-Development
|
Offensive-Windows-Drivers-Development
Languages: C (94.6%), C++ (5.4%)
Introduction/Project
Introduction/Project
PrivilegeEscalation/GetSystem
PrivilegeEscalation/GetSystem
Ransomware/Project1
Ransomware/Project1
...
README.md
README.md
Resources.md
Resources.md
> README.md
# Offensive-Windows-Drivers-Development
## Overview
**Offensive-Windows-Drivers-Development** is a research project designed to explore the development of Windows kernel-mode and user-mode drivers for offensive security purposes. The project focuses on techniques for low-level interaction with the Windows operating system, including file system interception, process manipulation, and advanced memory operations.
The goal is to provide insights into Windows internals and practical implementations that can aid red teamers, penetration testers and researchers in understanding how kernel-mode and user-mode drivers can be used in offensive scenarios, while also emphasizing the importance of defensive mechanisms to counter such techniques.
## Features
- **File System Interception**: Monitor and modify file I/O operations.
- **File Encryption**: Implement AES-based encryption at the kernel level.
- **Process Injection**: Advanced techniques for process manipulation from kernel space.
- **EDR Evasion**: Techniques for bypassing endpoint detection and response (EDR) solutions.
- **Memory Operations**: Direct manipulation of memory at the kernel level.
- **Proof-of-Concept (PoC) Drivers**: Examples for educational purposes.
## Prerequisites
- **Operating System**: Windows 10/11 (x64) with a kernel debugger (e.g., WinDbg).
- **Development Environment**: Visual Studio with Windows Driver Kit (WDK).
- **Tools**:
- [WinDbg](https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/)
- [Process Hacker](https://processhacker.sourceforge.io/)
- [Sysinternals Suite](https://learn.microsoft.com/en-us/sysinternals/)
## References
https://www.blackhat.com/docs/eu-17/materials/eu-17-Corina-Difuzzing-Android-Kernel-Drivers.pdf
https://voidsec.com/windows-drivers-reverse-engineering-methodology/
https://github.com/koutto/ioctlbf
https://github.com/otavioarj/SIOCTLBF
https://v1k1ngfr.github.io/winkernel-reverse-ida-ghidra/
https://infosecwriteups.com/understanding-ioctls-for-windows-vulnerability-research-exploit-development-c49229b38d8d
https://guidedhacking.com/threads/how-to-find-vulnerable-drivers-with-ioctlance.20824/
https://exploitreversing.com/2024/01/03/exploiting-reversing-er-series-article-02/
https://www.cyberark.com/resources/threat-research-blog/inglourious-drivers-a-journey-of-finding-vulnerabilities-in-drivers
https://www.cyberark.com/resources/threat-research-blog/finding-bugs-in-windows-drivers-part-1-wdm
https://research.checkpoint.com/2024/breaking-boundaries-investigating-vulnerable-drivers-and-mitigating-risks/
https://blogs.vmware.com/security/2023/10/hunting-vulnerable-kernel-drivers.html
https://www.unknowncheats.me/forum/general-programming-and-reversing/461976-methodology-static-reverse-engineering-windows-kernel-drivers.html
https://www.youtube.com/watch?v=7Trgnw7HkeE&ab_channel=OffByOneSecurity
https://www.youtube.com/watch?v=ViWLMfSwGVA&ab_channel=OALabs
https://www.youtube.com/watch?v=cabuolISweY&ab_channel=NirLichtman
|
https://github.com/armosec/curing
|
curing
io_uring based rootkit
Languages: Go (86.0%), C (12.1%), Makefile (1.9%)
.vscode
.vscode
build
build
cmd
cmd
io_uring_example
io_uring_example
pkg
pkg
...
.gitignore
.gitignore
Makefile
Makefile
README.md
README.md
go.mod
go.mod
go.sum
go.sum
> README.md
# Curing 💊
Curing is a POC of a rootkit that uses `io_uring` to perform different tasks without using any syscalls, making it invisible to security tools that only monitor syscalls.
The project was found effective against many of the most popular security tools, such as Linux EDR solutions and container security tools.
The idea was born at the latest CCC conference, #38c3, hence the name `Curing`, a mix of `C` and `io_uring`.
To read the full article, check the [blog post](https://www.armosec.io/blog/io_uring-rootkit-bypasses-linux-security).
## POC
You can find a full demo of bypassing Falco with `curing` [here](poc/POC.md).
In the POC, you will also find the commands to build and run the `curing` client and server.
## Proving 0 syscalls
To prove that the rootkit is not using any syscalls, you can use the following command:
```bash
strace -f -o /tmp/strace.log ./build/client
```
Zero syscalls is of course not possible; the idea is to prove that the rootkit does not use any syscalls related to the attack itself, only the `io_uring` syscalls.
## How it works
The `curing` client connects to the `curing` server and pulls commands from it to execute. The server sends commands to the client to read files, write files, create symbolic links, etc. The client uses `io_uring` to execute the commands and sends the results back to the server.
Because the client uses `io_uring`, it issues no syscalls related to the attack, making it invisible to security tools that only monitor syscalls.
To know more about `io_uring`, you can check the [official documentation](https://kernel.dk/io_uring.pdf).
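To make the mechanism concrete, here is a minimal, standalone liburing sketch (not taken from this codebase) that reads a file: the read is described in a submission queue entry and completed by the kernel, so no `read(2)` syscall appears in a trace, only the `io_uring` setup and submission calls.
```c
// Minimal illustrative example using liburing (link with -luring).
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>

int main(void) {
    struct io_uring ring;
    char buf[4096];

    if (io_uring_queue_init(8, &ring, 0) < 0)          // io_uring_setup under the hood
        return 1;

    int fd = open("/etc/hostname", O_RDONLY);          // opening still uses a syscall here
    if (fd < 0)
        return 1;

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);  // describe the read in an SQE

    io_uring_submit(&ring);                            // hand the SQE to the kernel

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);                    // wait for the completion entry
    printf("read %d bytes\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```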
## Features
- [x] Read files
- [x] Write files
- [x] Create symbolic links
- [x] C2 server communication
- [ ] Execute processes ([blocked](https://github.com/axboe/liburing/discussions/1307))
- [ ] Any other feature from [here](https://github.com/axboe/liburing/blob/1a780b1fa6009fe9eb14dc48a99f6917556a8f3b/src/include/liburing/io_uring.h#L206)
## io_uring quick start
If you just want to play around with `io_uring` and test the security tool nearest to your house, you can use the example [here](io_uring_example/README.md).
## Requirements
- Linux kernel 5.1 or later
## Disclaimer
This project is a POC and should not be used for malicious purposes. The project is created to show how `io_uring` can be used to bypass security tools which are relying on syscalls.
We are not responsible for any kind of abuse of this project.
|
https://github.com/JOKOSAHS/DMA-Pcileech
|
DMA-Pcileech
DMA-Pcileech-AX200
Languages: Verilog (80.3%), Tcl (12.7%), SystemVerilog (7.0%)
src
src
...
README.md
README.md
pcie_7x
pcie_7x
vivado_build.tcl
vivado_build.tcl
vivado_build_100t.tcl
vivado_build_100t.tcl
vivado_generate_project_100t.tcl
vivado_generate_project_100t.tcl
> README.md
## **💡 Current Status**
The **network card firmware** has been patched by **ACE** and is no longer functional in its original state.
The purpose of open-sourcing this firmware is to **accelerate learning** and provide insights into this project.
By sharing this resource, we aim to foster a deeper understanding and enable developers to explore and innovate further.
Remember, this is for educational and research purposes—let's build a better community together!
## **🔧 Focus Areas**
Our primary focus is on **media cards**, **sound cards**, and **hard drives**.
Due to the nature of these devices, we have decided **not to open-source them**.
If you are interested in discussing these technologies or joining our community, feel free to connect with us on **Discord**:
## ⚠️ About TLP Interrupts
The TLP interrupt mechanism requires proper echoing with the computer motherboard. This project was developed specifically for **ASUS motherboards**, ensuring compatibility. However, many individuals who have **stolen this project** fail to adapt the kernel interrupts for other systems. As a result, users might experience issues such as **blue screens** or other errors due to these unaddressed compatibility problems. These individuals often mislead users into switching to specific motherboards instead of resolving the underlying issues, highlighting their lack of technical expertise.
While **network card firmware technology is outdated**, some developers continue to sell it at **high prices**, exploiting users who may not know better. Our decision to open-source this technology has disrupted many fraudulent developers, leading to retaliation instead of constructive improvements on their part. We believe that true developers should focus on **learning, innovating**, and solving compatibility challenges rather than deceiving customers or charging unreasonable fees.
## ⚠️ Important Update on Network Card Firmware
ACE has now marked all **Intel** and **Realtek** series network cards. In the future, **network card firmware** will be fully detected. Scammers who are exploiting our open-source technology will soon be exposed.
### Why We Open-Sourced This Project
The primary purpose of open-sourcing this project was to **counter the exploitation of our work**. By making the technology publicly available, we ensure that **malicious users** cannot hide behind our creations and resell them unlawfully.
We will continue to monitor and update the firmware to stay ahead of these attempts.
Thank you for your continued support.
**Purpose of Open Source**: Following a leak by a customer, the firmware became widely distributed and resold. To address this, it has been made public.
Note that once exposed, anti-cheat systems will likely detect the firmware's space, which may limit its future usability.
We are not working with any developers and any mention of us is false!
## Caution Against Scammers
Currently, there are many fake users cloning this open-source library and using our information to deceive others.
**Devices for Full Emulation**: Killer series network cards.
**New Update**: We have modified a new IP core to support the detection of Delta games.
**Want to share your insights or seek assistance?**
> ⚠️ **Note to Malicious Attackers and Troublemakers**
> Please refrain from joining our group. We do not welcome individuals who intend to misuse our free resources for resale purposes. Such members will be removed from the community
### Open Access to Network Card DMA Firmware
ACE has recently restricted many RTL-type devices, including network card DMA firmware. Importantly, this technology has become **publicly accessible**, allowing anyone with basic technical knowledge to quickly learn and create it. As a result, prices for these firmware solutions remain relatively affordable, generally within the **100-180 USD** range. This applies to both Killer cards and other models, so prices should not vary significantly.
### Recognizing False Claims and High-Price Tactics
Some individuals may attempt to mislead new players by claiming that open-source network card devices, often with additional modifications, are exclusive "internal" products. They may also assert that their versions are unique or private.
- **Unique Firmware:** ACE is likely to soon gather data on all such devices. Each firmware version requires unique encoding, ensuring distinct versions for each user.
- **Open and Accessible Technology:** With the right emulation skills, anyone can achieve stability and reliability in these devices.
There is no "private" firmware—only **thousands of lines of code** accessible to those who seek it.
### Scam Alert
If you’ve paid **300 USD** for network card emulation firmware, there’s a strong chance you’ve been overcharged, as this technology is now widely accessible.
**Devices for Full Emulation**: Killer series network cards.
|
https://github.com/NVIDIA-RTX/RTXPT
|
RTXPT
Real-time path tracing library and sample
Languages: HLSL (51.9%), C++ (44.8%), C (2.2%)
Docs
Docs
External
External
Rtxpt
Rtxpt
Support
Support
...
.gitignore
.gitignore
.gitmodules
.gitmodules
CLA.txt
CLA.txt
CMakeLists.txt
CMakeLists.txt
LICENSE.txt
LICENSE.txt
> README.md
# RTX Path Tracing v1.6.0

## Overview
RTX Path Tracing is a code sample that strives to embody years of ray tracing and neural graphics research and experience. It is intended as a starting point for a path tracer integration, as a reference for various integrated SDKs, and/or for learning and experimentation.
The base path tracing implementation derives from NVIDIA’s [Falcor Research Path Tracer](https://github.com/NVIDIAGameWorks/Falcor), ported to approachable C++/HLSL [Donut framework](https://github.com/NVIDIAGameWorks/donut).
GTC presentation [How to Build a Real-time Path Tracer](https://www.nvidia.com/gtc/session-catalog/?tab.catalogallsessionstab=16566177511100015Kus&search.industry=option_1559593201839#/session/1666651593475001NN25) provides a high level introduction to most of the features.
## Features
* DirectX 12 and Vulkan back-ends
* Reference and real-time modes
* Simple BSDF model that is easy(ish) to extend
* Simple asset pipeline based on glTF 2.0 (support for a subset of glTF extensions including animation)
* Volumes and nested dielectrics with priority
* Support for analytic lights (directional, spot, point), emissive triangles and environment map lighting
* NEE lighting with feedback-based, temporally adaptive importance sampling
* Path tracing features such as: Low-discrepancy sample generator based on [Practical Hash-based Owen Scrambling](https://jcgt.org/published/0009/04/01/paper.pdf), use of [RayCones](https://research.nvidia.com/publication/2021-04_improved-shader-and-texture-level-detail-using-ray-cones) for texture MIP selection, RR early ray termination, firefly filter and similar
* Basic post-processing features such as: TAA, tone mapping, bloom and similar
* Reference mode 'photo-mode screenshot' with simple [OptiX denoiser](https://developer.nvidia.com/optix-denoiser) integration
* [Shader Execution Reordering](https://developer.nvidia.com/blog/improve-shader-performance-and-in-game-frame-rates-with-shader-execution-reordering/) for significant increase in execution performance
* [RTXDI](https://github.com/NVIDIA-RTX/RTXDI) integration for ReSTIR DI (light importance sampling) and ReSTIR GI (indirect lighting)
* [OMM](https://github.com/NVIDIA-RTX/OMM) integration for fast ray traced alpha testing
* [NRD](https://github.com/NVIDIA-RTX/NRD) ReLAX and ReBLUR denoiser integration with up to 3-layer path space decomposition
* [RTXTF](https://github.com/NVIDIA-RTX/RTXTF) integration for Stochastic Texture Filtering
* [Streamline](https://github.com/NVIDIAGameWorks/Streamline/) integration for DLSS 4.0 (DLSS RR, DLSS SR, DLSS AA, DLSS FG & MFG)
## Requirements
- Windows 10 20H1 (version 2004-10.0.19041) or newer
- DXR Capable GPU (DirectX Raytracing 1.1 API, or higher)
- GeForce Game Ready Driver 576.52 or newer
- DirectX 12 or Vulkan API
- CMake v3.14+
- Visual Studio 2022 (v143 build tools) or later with Windows 10 SDK version 10.0.20348.0 or 10.0.26100.0 or later
## Known Issues
* Enabling Vulkan support requires a couple of manual steps, see [below](#building-vulkan)
* SER, OMM and Streamline support on Vulkan is currently work in progress
* Running Vulkan on AMD GPUs may trigger a TDR during TLAS building in scenes with null TLAS instances
* We recommend using *NVIDIA Nsight Graphics* for frame capture and analysis. If using other GPU performance tuning and debugging tools such as *PIX on Windows*, it is advisable to disable the NVRHI_WITH_NVAPI and DONUT_WITH_STREAMLINE variables in CMake to avoid compatibility issues. Please note: disabling these settings results in lower performance and missing features
* There is a known issue resulting in LIVE_DEVICE DirectX warnings reported at shutdown when Streamline is enabled in Debug builds
* There is a known issue resulting in black or incorrect transparencies/reflection on some AMD systems with latest drivers; this is being investigated
## Folder Structure
| | |
| - | - |
| /bin | default CMake folder for binaries and compiled shaders
| /build | default CMake folder for build files
| /Assets | models, textures, scene files
| /Docs | documentation
| /External | external libraries and SDKs, including Donut, Streamline, NRD, RTXDI, and OMM
| /Support | optional command line tools (denoiser, texture compressor, etc)
| /Rtxpt | **RTX Path Tracing core; Sample.cpp/.h/.hlsl contain entry points**
| /Rtxpt/PathTracer | **Core path tracing shaders**
## Build
At the moment, only Windows builds are fully supported. We are going to add Linux support in the future.
1. Clone the repository **with all submodules recursively**:
`git clone --recursive https://github.com/NVIDIA-RTX/RTXPT.git`
2. Use CMake to configure the build and generate the project files.
```
cd RTXPT
cmake CMakeLists.txt -B ./build
```
Use `-G "some tested VS version"` if a specific Visual Studio or other environment version is required. Make sure the x64 platform is used.
3. Build the solution generated by CMake in the `./build/` folder.
For example, if using Visual Studio, open the generated solution `build/RTXPathTracing.sln` and build it.
4. Select and run the `Rtxpt` project. Binaries get built to the `bin` folder. Assets/media are loaded from `Assets` folder.
If making a binary build, the `Assets` and `Support` folders can be placed into `bin` next to the executable and packed up together (i.e. the sample app will search for both `Assets/` and `../Assets/`).
## Building Vulkan
Due to interaction with various included libraries, Vulkan support is not enabled by default and needs a couple of additional tweaks on the user side; please find the recommended steps below:
* Install Vulkan SDK (we tested with VulkanSDK-1.3.290.0) and clear CMake cache (if applicable) to make sure the correct dxc.exe path from Vulkan SDK is set for SPIRV compilation
* Set DONUT_WITH_VULKAN and NVRHI_WITH_VULKAN CMake variables to ON. DXC_SPIRV_PATH should already have automatically picked up the location of the DXC compiler in the Vulkan SDK during config; if not, please set it manually
* Disable streamline integration by setting DONUT_WITH_STREAMLINE CMake variable to OFF
* To run with Vulkan use `--vk` command line parameter
## DirectX 12 Agility SDK
RTX PT optionally integrates the [DirectX 12 Agility SDK](https://devblogs.microsoft.com/directx/directx12agility/). If the RTXPT_DOWNLOAD_AND_ENABLE_AGILITY_SDK CMake variable is set to TRUE, version 717-preview will be automatically downloaded via a CMake script and the required build variables will be set. If a different version is required, please set the correct RTXPT_D3D_AGILITY_SDK_PATH and RTXPT_D3D_AGILITY_SDK_VERSION.
Version 717-preview enables native DirectX support for [Shader Execution Reordering](https://devblogs.microsoft.com/directx/ser/). For testing this on Nvidia hardware, a preview driver is required and can be downloaded from https://developer.nvidia.com/downloads/shadermodel6-9-preview-driver
## User Interface
Once the application is running, most of the SDK features can be accessed via the UI window on the left hand side and drop-down controls in the top-center.

Camera can be moved using W/S/A/D keys and rotated by dragging with the left mouse cursor.
## Command Line
- `--scene` loads a specific .scene.json file; example: `--scene programmer-art.scene.json`
- `--width` and `--height` to set the window size; example: `--width 3840 --height 2160`
- `--fullscreen` to start in full screen mode; example: `--width 3840 --height 2160 --fullscreen`
- `--debug` to enable the graphics API debug layer or runtime, and additional validation layers.
- `--vk` to enable Vulkan (see [building-vulkan](#building-vulkan))
## Developer Documentation
We are working on more detailed SDK developer documentation - watch this space!
## Contact
RTX Path Tracing is under active development. Please report any issues directly through GitHub issue tracker, and for any information, suggestions or general requests please feel free to contact us at pathtracing-sdk-support@nvidia.com!
## Thanks
Many thanks to the developers of the following open-source libraries or projects that make this project possible:
* dear imgui (https://github.com/ocornut/imgui)
* DirectX Shader Compiler (https://github.com/microsoft/DirectXShaderCompiler)
* cgltf, Single-file glTF 2.0 loader (https://github.com/jkuhlmann/cgltf)
* Krzysztof Narkowicz's Real-time BC6H compression on GPU (https://github.com/knarkowicz/GPURealTimeBC6H)
* okdshin's https://github.com/okdshin/PicoSHA2
* ...and any we might have forgotten (please let us know) :)
## Citation
If you use RTX Path Tracing in a research project leading to a publication, please cite the project.
The BibTex entry is
```bibtex
@online{RTXPT,
title = {{{NVIDIA}}\textregistered{} {RTX Path Tracing}},
author = {{NVIDIA}},
year = 2023,
url = {https://github.com/NVIDIA-RTX/RTXPT},
urldate = {2024-01-26},
}
```
## License
See [LICENSE.txt](LICENSE.txt)
This project includes NVAPI software. All uses of NVAPI software are governed by the license terms specified here: https://github.com/NVIDIA/nvapi/blob/main/License.txt.
|
https://github.com/ading2210/doompdf
|
doompdf
A port of Doom (1993) that runs inside a PDF file
Languages:
.github/workflows
.github/workflows
.vscode
.vscode
doomgeneric
doomgeneric
web
web
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
build.sh
build.sh
embed_file.py
embed_file.py
> README.md
# DoomPDF
This is a Doom source port that runs inside a PDF file.
Play it here: [doom.pdf](https://doompdf.pages.dev/doom.pdf)
https://github.com/user-attachments/assets/0b39de6d-a53a-4494-8eba-3d16e7431e3a
> [!IMPORTANT]
> Any crypto things claiming to be related to me or this project are fake. See https://bsky.app/profile/ading.dev/post/3lfyhqifjls2p
## Javascript in a PDF
You might expect PDF files to be purely static documents, but surprisingly, the PDF file format supports JavaScript with its own separate standard library. Modern browsers (Chromium, Firefox) implement this as part of their PDF engines. However, the APIs that are available in the browser are much more limited.
The full specification for JS in PDFs was only ever implemented by Adobe Acrobat, and it contains some ridiculous things like the ability to do [3D rendering](https://opensource.adobe.com/dc-acrobat-sdk-docs/library/jsapiref/JS_API_AcroJS.html#annot3d), make [HTTP requests](https://opensource.adobe.com/dc-acrobat-sdk-docs/library/jsapiref/JS_API_AcroJS.html#net-http), and [detect every monitor connected to the user's system](https://opensource.adobe.com/dc-acrobat-sdk-docs/library/jsapiref/JS_API_AcroJS.html#monitor). However, on Chromium and other browsers, only a tiny amount of this API surface was implemented, due to obvious security concerns. With this, we can do whatever computation we want, just with some very limited IO.
## Porting Doom
C code can be compiled to run within a PDF using an old version of Emscripten that targets [asm.js](https://en.wikipedia.org/wiki/Asm.js) instead of WebAssembly. Then, all that's needed is a way to get key inputs, and a framebuffer for the output. Inputs are fairly straightforward, since Chromium's PDF engine supports text fields and buttons. Getting a good looking and fast enough framebuffer is a lot more of a challenge though.
Previous interactive PDF projects I've seen use individual text fields that are toggled on/off to make individual pixels. However, Doom's resolution is 320x200 which would mean thousands of text fields would have to be toggled every frame, which is infeasible. Instead, this port uses a separate text field for each row in the screen, then it sets their contents to various ASCII characters. I managed to get a 6 color monochrome output this way, which is enough for things to be legible in-game. The performance of this method is pretty poor but playable, since updating all of that text takes around 80ms per frame.
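To give a rough sense of how one screen row becomes text, here is a small Python sketch of the intensity-to-ASCII mapping idea. It is illustrative only: the port itself does this in the PDF's JavaScript, and the character ramp below is a placeholder rather than the ramp DoomPDF actually uses.

```python
# Illustrative only: map one 320-pixel grayscale scanline to a line of ASCII
# characters, similar in spirit to filling one PDF text field per screen row.
# The character ramp is a placeholder, not the one DoomPDF actually uses.
ASCII_RAMP = " .:-=%@"  # darkest -> brightest

def row_to_ascii(row):
    """row: iterable of 0-255 grayscale values for one scanline."""
    return "".join(ASCII_RAMP[value * (len(ASCII_RAMP) - 1) // 255] for value in row)

if __name__ == "__main__":
    import random
    scanline = [random.randint(0, 255) for _ in range(320)]
    print(row_to_ascii(scanline))
```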
I also implemented a scrolling text console using 25 stacked text fields. The stdout stream from Emscripten is redirected to there. This let me debug a lot easier because otherwise there is no console logging method available (the proper `console.println` is unimplemented in Chrome).
There's also a feature to insert custom WAD files into the PDF. You can go to https://doompdf.pages.dev/, select your WADs, and download a newly generated PDF file with those WADs preloaded.
## Build Instructions
Clone this repository and run the following commands:
```
python3 -m venv .venv
source .venv/bin/activate
pip3 install -r requirements.txt
env CFLAGS=-O3 ./build.sh
```
The `build.sh` script will download Emscripten `1.39.20` automatically. You must be on Linux to build this.
The generated files will be in the `out/` directory. Then you can run `(cd out; python3 -m http.server)` to serve the files on a web server.
## Credits
This port is made by [@ading2210](https://github.com/ading2210/).
Forked from [doomgeneric](https://github.com/ozkl/doomgeneric).
Inspired by [horrifying-pdf-experiments](https://github.com/osnr/horrifying-pdf-experiments) and [pdftris](https://github.com/ThomasRinsma/pdftris).
## License
This repository is licensed under the GNU GPL v2.
```
ading2210/doompdf - Doom running inside a PDF file
Copyright (C) 2025 ading2210
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
```
|
https://github.com/NVIDIA-RTX/RTXNTC
|
RTXNTC
NVIDIA Neural Texture Compression SDK
Languages: C++ (49.2%), C (39.6%), Python (7.5%), CMake (2.1%), HLSL (1.5%), Batchfile (0.1%)
assets
assets
docs
docs
external
external
libraries
libraries
samples/renderer
samples/renderer
...
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
ChangeLog.md
ChangeLog.md
> README.md
# RTX Neural Texture Compression (NTC) SDK v0.7.1 BETA
[Quick Start Guide](#quick-start-guide)
## Introduction
Neural Texture Compression (NTC) is an algorithm designed to compress all PBR textures used for a single material together. It works best when the texture channels are correlated with each other, for example, detail in the albedo texture corresponds to detail in the normal texture. Up to 16 texture channels can be compressed into one NTC texture set. Typical PBR materials have 9-10 channels: 3x albedo, 3x normal, metalness, roughness, ambient occlusion, opacity.
During compression, the original texture data is transformed into a combination of weights for a small neural network (decoder) and a tensor of latents or features that are sampled and passed through the decoder to reconstruct the texture colors, as is illustrated below. The sampling and decoding processes are fast enough to use them directly in the shaders that normally sample the material textures, such as base pass pixel shaders or ray tracing hit shaders. However, the decoder produces unfiltered data for only one texel, and in order to get filtered textures, we suggest using NTC in combination with [Stochastic Texture Filtering (STF)](https://github.com/NVIDIA-RTX/RTXTF). For renderers targeting lower-end hardware, we suggest implementing the "Inference on Load" mode where NTC textures are decompressed when the game or map is loaded, and transcoded to one of the block-compressed formats (BCn) at the same time. There is also an advanced "Inference on Feedback" mode that uses Sampler Feedback to find the set of texture tiles needed to render the current view and then decompresses only those tiles, storing them in a sparse tiled texture as BCn.

For more background information, please refer to the [Random-Access Neural Compression of Material Textures](https://research.nvidia.com/labs/rtr/neural_texture_compression/) page on the NVIDIA Research website.
### Example Compression Rates
NTC can be thought of as an *adjustable quality/constant bitrate lossy compression scheme*. This means that it will attempt to reconstruct the input images with minimal error while using a fixed amount of data specified as a compression-time target. However, unlike block compression schemes (which have a fixed data rate for a given format), the per-texel memory footprint of an NTC-compressed texture bundle will vary based on the specified *Latent Shape* (which is the composite of the number of high- and low-resolution latent channels, the bit depth of those high- and low-resolution channels, and the scale factor between them). Each latent shape corresponds to a given per-texel bitrate, so for a desired compression level a compatible latent shape can be selected that matches the desired bitrate as closely as possible. Furthermore, although NTC compression requires specifying a latent shape (and thus bitrate), it is possible to approximate a *constant quality/variable bitrate* approach by performing pre-analysis of the bundle to determine what formats are required to achieve a target quality level. See [Adaptive Compression](docs/SettingsAndQuality.md#adaptive-compression) for more.
To demonstrate how NTC compares to other methods, consider a material defined by the following bundle of material textures:
* Albedo Color (RGB)
* Normal (XY)
* Roughness
* Metalness
* Ambient Occlusion Factor

*Example material from [MetalPlates013 from AmbientCG](https://ambientcg.com/view?id=MetalPlates013)*
Assuming 8 bits per channel and optimal channel packing, this corresponds to a bitrate of 64 bits/texel. In a contemporary pipeline this might be block-compressed into one BC7 texture for Albedo (8 bits/texel), one BC5 texture for normals (4 bits/texel), and a third BC7 texture for Roughness, Metalness, and Ambient Occlusion packed as separate channels (for another 8 bits/texel). For NTC we have found that many real-world texture bundles of this format can be compressed with results comparable to BCn (a PSNR of 40 to 50 dB) with a latent shape requiring about 3 bits/texel.
If we assume a 2k-by-2k texture resolution (and ignore the mip chains) we can compute the texture footprint of the bundle at various points in the data pipeline:
| Bundle Compression | Disk Size | PCI-E Traffic | VRAM Size |
|:-------------------|----------:|--------------:|----------:|
| Raw Image | 32.00 MB | 32.00 MB | 32.00 MB |
| BCn Compressed | 10.00 MB | 10.00 MB | 10.00 MB |
| NTC-on-Load* | 1.52 MB | 1.52 MB | 10.00 MB |
| NTC-on-Sample | 1.52 MB | 1.52 MB | 1.52 MB |
*: Assumes transcoding to equivalent BCn formats at decompression time.
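As a quick sanity check of the table above, the sizes follow directly from the bit rates quoted in the text. The NTC figure depends on the exact latent shape, so 3 bits/texel is only an approximation of the 1.52 MB listed in the table.

```python
# Back-of-the-envelope check of the table above for a 2048x2048 texture bundle
# (mip chain ignored). Bit rates follow the text: 64 bits/texel raw,
# 8 + 4 + 8 = 20 bits/texel for the BCn packing, ~3 bits/texel for NTC.
TEXELS = 2048 * 2048

def megabytes(bits_per_texel: float) -> float:
    return TEXELS * bits_per_texel / 8 / (1024 * 1024)

print(f"Raw images:     {megabytes(64):.2f} MB")  # 32.00 MB
print(f"BCn compressed: {megabytes(20):.2f} MB")  # 10.00 MB
print(f"NTC compressed: {megabytes(3):.2f} MB")   # ~1.50 MB; the table's 1.52 MB reflects the actual latent shape
```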
See the [Settings and Quality Guide](docs/SettingsAndQuality.md) to learn more about various NTC settings and how they affect compression ratios and image quality.
### Cooperative Vector and Inference
Decompressing texels with NTC requires reading the latent data corresponding to a given texture coordinate and then performing an *inference* operation by running it through a small Multi-Layer Perceptron (MLP) network whose weights are determined during compression and stored as part of the compressed bundle. While this operation is modest relative to the massive networks employed by many other deep learning applications, it still carries a significant computational cost relative to the average pixel shader commonly seen in 3D rendering applications. Fortunately, NTC is able to benefit from new [Cooperative Vector](https://registry.khronos.org/vulkan/specs/latest/man/html/VK_NV_cooperative_vector.html) extensions for Vulkan and Direct3D 12 which allow pixel shaders to leverage the same hardware acceleration used in large network inference. On Ada- and Blackwell-class GPUs this provides a 2-4x improvement in inference throughput over competing optimal implementations that do not utilize these new extensions.
In order to provide robust backwards compatibility, fallback implementations of the inference code using the `DP4a` instructions or regular integer math have also been provided. This will allow for the decompression code to be executed reliably on any platform that supports at least Direct3D 12 Shader Model 6; however, there will be substantial performance improvements on newer GPUs. See [System Requirements](#system-requirements) for more details.
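Conceptually, per-texel decompression boils down to sampling a latent vector and pushing it through a small MLP. The Python sketch below illustrates only that idea; the layer widths, activation, and random weights are placeholders for illustration and are not LibNTC's actual network definition.

```python
import numpy as np

# Conceptual sketch of per-texel NTC inference: sample a latent vector for one
# texel, then run it through a small MLP whose weights ship with the bundle.
# Layer sizes, the ReLU activation, and the random weights are placeholders.
rng = np.random.default_rng(0)
latent = rng.standard_normal(16)                # sampled latent features for one texel
w0, b0 = rng.standard_normal((64, 16)), np.zeros(64)
w1, b1 = rng.standard_normal((16, 64)), np.zeros(16)

hidden = np.maximum(w0 @ latent + b0, 0.0)      # small hidden layer
channels = w1 @ hidden + b1                     # up to 16 reconstructed texture channels
print(channels.shape)                           # (16,)
```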
> ### WARNING: Pre-Release Feature Dependency for Direct3D 12
> NTC texture decompression for DX12 applications, both on-load and on-sample, relies on a preview version of the [Microsoft DirectX 12 Agility SDK](https://devblogs.microsoft.com/directx/directx12agility/), specifically, `1.717.x-preview`. In order for the [Cooperative Vector](https://devblogs.microsoft.com/directx/cooperative-vector/) extensions to work, the application must enable the `D3D12ExperimentalShaderModels` and `D3D12CooperativeVectorExperiment` features, which require that Windows is configured to be in the Developer Mode.
>
> A pre-release NVIDIA GPU driver version 590.26 or later is required for Shader Model 6.9 functionality.
>
> All non-CoopVec versions of DX12 decompression, as well as all Vulkan versions including CoopVec, are OK to use for shipping.
>
> **The DX12 Cooperative Vector support is for testing purposes only. DO NOT SHIP ANY PRODUCTS USING IT.**
## Quick Start Guide
See the [Build Guide](#build-guide) for instructions on compiling the SDK.
To experiment with how Neural Texture Compression performs with different latent shapes and texture bundles, follow the [NTC Explorer Guide](docs/Explorer.md). A few sample material sets have been provided in this package under the `./assets/materials` folder.
To see how NTC works on a sample 3D scene follow the instructions in the [NTC Renderer Guide](docs/Renderer.md) for pre-processing a GLTF scene. A sample scene has been provided under the `./assets/models` folder.
## SDK Contents
| Component | Description | Source code |
|-----------|-------------|-------------|
| [LibNTC](https://github.com/NVIDIA-RTX/RTXNTC-Library) | Library that implements neural texture compression, decompression, and BCn encoding | See Component
| [`ntc-cli`](docs/CommandLineTool.md) | Command-line tool for texture compression and decompression | [tools/cli](tools/cli)
| [NTC Explorer](docs/Explorer.md) | App for interactive experimentation with neural texture compression and a viewer for NTC files | [tools/explorer](tools/explorer)
| [NTC Renderer](docs/Renderer.md) | Sample app that demonstrates how to render a GLTF model using NTC materials | [samples/renderer](samples/renderer)
| [BCTest](docs/BCTest.md) | Test app for evaluating the performance and quality of BCn encoders | [support/tests/bctest](support/tests/bctest)
| [`ntc.py`](libraries/ntc.py) | Python module for developing automation scripts that process materials using `ntc-cli` | See Component
| [`test.py`](support/tests/test.py) | Script for basic functional testing of texture compression and decompression | See Component
| [Materials](assets/materials) | Example materials for the CLI tool and Explorer
| [FlightHelmet model](assets/models) | Example model for the Renderer sample
## System Requirements
Operating System:
- Windows 10/11 x64
- Linux x64
Graphics APIs:
- DirectX 12 - with preview Agility SDK for Cooperative Vector support
- Vulkan 1.3
GPU for NTC decompression on load and transcoding to BCn:
- Minimum: Anything compatible with Shader Model 6 [*]
- Recommended: NVIDIA Turing (RTX 2000 series) and newer.
GPU for NTC inference on sample:
- Minimum: Anything compatible with Shader Model 6 (will be functional but very slow) [*]
- Recommended: NVIDIA Ada (RTX 4000 series) and newer.
GPU for NTC compression:
- Minimum: NVIDIA Turing (RTX 2000 series).
- Recommended: NVIDIA Ada (RTX 4000 series) and newer.
_[*] The oldest GPUs that the NTC SDK functionality has been validated on are NVIDIA GTX 1000 series, AMD Radeon RX 6000 series, Intel Arc A series._
For Cooperative Vector support on NVIDIA GPUs, please use the NVIDIA Graphics Driver preview version 590.26 or newer for DX12, or at least version 570 for Vulkan. The preview drivers can be downloaded using the following links (require an NVIDIA Developer Program account):
- GeForce GPUs: https://developer.nvidia.com/downloads/shadermodel6-9-preview-driver
- Quadro GPUs: https://developer.nvidia.com/downloads/assets/secure/shadermodel6-9-preview-driver-quadro
For a list of software components needed to build the SDK, please refer to the [Build Guide](#build-guide).
## Known Issues
The following issues are observed with NVIDIA Display Driver 590.26:
- Cooperative Vector inference (both on-load and on-sample) using INT8 math is slower than expected on DX12 (bug 5341486)
## Build Guide
NTC SDK supports Windows x64 and Linux x64 targets.
### Windows x64
Building the NTC SDK on Windows requires the following components:
- Visual Studio 2022 (at least the build tools)
- [Windows SDK](https://developer.microsoft.com/en-us/windows/downloads/windows-sdk) (tested with 10.0.26100.0)
- [CMake](https://cmake.org/download) (tested with v3.28 and v3.31)
- [CUDA SDK](https://developer.nvidia.com/cuda-downloads) (tested with v12.8 and v12.9)
Follow the usual way of building CMake projects on Windows:
- Clone the project recursively:
```sh
git clone --recursive https://github.com/NVIDIA-RTX/RTXNTC.git
```
- Building using the "x64 Native Tools Command Prompt for VS 2022":
```sh
cd RTXNTC
mkdir build
cd build
cmake ..
cmake --build .
```
- Building using CMake GUI:
* Set "Where is the source code" to the `RTXNTC` folder
* Set "Where to build the binaries" to `RTXNTC/build`
* Configure a solution using `Visual Studio 2022` tools for the `x64` platform.
* Generate and open the solution in Visual Studio.
* Build.
Visual Studio Code with CMake Tools extension and Ninja build system works fine, too.
### Linux x64
Building the NTC SDK on Linux requires the following components:
- C++ compiler (tested with GCC 12.2 and Clang 16.0)
- [CMake](https://cmake.org/download) (tested with v3.25 and 3.31)
- [CUDA SDK](https://developer.nvidia.com/cuda-downloads) (tested with v12.4)
- Some development packages, approximately:
```sh
sudo apt-get update
sudo apt-get install build-essential cmake libx11-dev libxrandr-dev libxinerama-dev libxcursor-dev libxi-dev
```
Follow the usual way of building CMake projects on Linux:
- Clone the project recursively:
```sh
git clone --recursive https://github.com/NVIDIA-RTX/RTXNTC.git
```
- Create the build folder, configure and build:
```sh
mkdir build && cd build
cmake ..
make -j
```
## Integration Guide
The SDK provides several tools for incorporating NTC into a content pipeline and engine runtime.
The `LibNTC` library provides all functionality necessary to compress, serialize/deserialize, and decompress texture sets using whatever hardware acceleration is available. It also provides implementation of a GPU-based BCn encoder to allow for transcoding decompressed textures into block-compressed format at load time, as well as a shader library demonstrating how to decompress bundle texels directly in pixel shaders at sample time.
Additionally, the `ntc-cli` tool can be included as part of any script-based pipelines if that is preferable. This tool is based on `LibNTC` as well and can perform compression and decompression tasks as desired. For more information on syntax please consult the [NTC Command-Line Tool Documentation](docs/CommandLineTool.md). Texture bundle information can be specified directly from the command line; however, the `ntc-cli` tool can also be given a [Bundle Manifest File](docs/Manifest.md) which encodes image file paths, semantic and formatting information, and desired transcoding formats into one convenient serialized object. Your content pipeline can either track this information using its own database or use the provided manifest format as desired.
Further details about specific usages of `LibNTC` can be found divided by topic in the following guides, which walk through a proposed pipeline for a typical NTC application:
1. Library Initialization
* [Installing LibNTC into your project](docs/integration/Installation.md)
* [Initializing the context](docs/integration/Context.md)
2. Compression
* [Compressing using LibNTC](docs/integration/Compression.md)
* [Compression using the `ntc-cli` tool](docs/CommandLineTool.md)
* [Texture Bundle Manifest Specification](docs/Manifest.md)
3. Decompression
* On Load
* [Decompressing texture sets with graphics APIs](docs/integration/InferenceOnLoad.md)
* [Transcoding to BCn and image comparison](docs/integration/BlockCompression.md)
* On Sample
* [Inference on Sample using Cooperative Vector](docs/integration/InferenceOnSample.md)
* On Feedback
* See the [Inference on Feedback section](docs/Renderer.md#inference-on-feedback-mode) in the Renderer's Readme file.
The [NTC Explorer](docs/Explorer.md) and [NTC Renderer](docs/Renderer.md) samples demonstrate using `LibNTC` for compression and decompression and can be used as a further reference for what an integrated content workflow might look like.
## Support
Please use GitHub issues or email [rtxntc-sdk-support@nvidia.com](mailto:rtxntc-sdk-support@nvidia.com) for developer support.
## License
[NVIDIA RTX SDKs LICENSE](LICENSE.txt)
This project includes NVAPI software. All uses of NVAPI software are governed by the license terms specified here: https://github.com/NVIDIA/nvapi/blob/main/License.txt
|
https://github.com/ovsky/sumi-emu
|
sumi-emu
Sumi | The latest, best and especially most performant Nintendo Switch emulator! Run Nintendo Switch titles on your Android, Windows, Mac and Linux devices :)
Languages: C++ (98.4%), CMake (0.8%), Kotlin (0.7%), GLSL (0.1%), NASL (0.0%), Python (0.0%)
.github/workflows
.github/workflows
.reuse
.reuse
CMakeModules
CMakeModules
LICENSES
LICENSES
dist
dist
...
.codespellrc
.codespellrc
.git-blame-ignore-revs
.git-blame-ignore-revs
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitmodules
.gitmodules
> README.md
<h1 align="center">
🎲 Sumi – Efficient Nintendo Switch Emulator
</h1>
<p align="center">
<a href="https://github.com/ovsky/sumi-emu" target="_blank">
<img height="40%" width="20%" src="https://raw.githubusercontent.com/ovsky/sumi-emu/refs/heads/experimental/src/android/app/src/main/res/drawable/ic_sumi_full.png"><br>
</a>
</p>
<h4 align="center">
Join newly created Sumi Community on Discord!<br><br>
<a href="https://discord.gg/eZ7wC7Kr" target="_blank">
<img " src="https://img.shields.io/discord/1379376999617007776.svg?label=Sumi%20Community%20-%20Discord&logo=discord&logoColor=ffffff&color=rgba(40%2C%200%2C%20190%2C%200.8)&labelColor=404EED">
</a>
<!-- <a href="https://github.com/ovsky/sumi-emu/actions/workflows/ci.yml" target="_blank">
<img src="https://github.com/ovsky/sumi-emu/actions/workflows/ci.yml/badge.svg"><br>
</a> -->
</h4>
<h1 align="center">
</h1>
<p align="center">
<b><a href="CONTRIBUTING.md">Contributing Guide</a> • <a href="BUILDING.md">Building Guide</a></b>
</p>
<p align="center">
<b>Sumi</b> is an experimental multiplatform emulator that focuses on <b>ARMv8 Android™</b> devices and emulates the functionality of a <b>Nintendo Switch™</b> system, licensed under <a href="https://github.com/ovsky/sumi/blob/master/LICENSE.md"><b>GNU General Public License v3.0 or later</b></a>
</p>
---
Welcome to **Sumi**, a cutting-edge Nintendo Homebrew emulator designed to deliver an optimized experience for playing your favorite games and exploring new ones. Sumi is a **high-performance** and **easy-to-use** emulator, tailored for enthusiasts and developers alike.
> **Disclaimer**: Sumi is intended strictly for legal homebrew use and is not affiliated with or endorsed by Nintendo. Use of Sumi for pirated or unauthorized copies of games is strictly prohibited. Please respect game developers and support them by purchasing legitimate copies of their games.
---
## Downloads 📦
Ready to experience **Sumi**? ☺️
To get the latest version of **Sumi** emulator, reach our releases page:
https://github.com/ovsky/sumi-emu/releases
---
## Features ✨
- **Latest Technologies**: Consistently updated tech stack.
- **Best Performance**: The best efficiency solutions were implemented.
- **User-Friendly**: Clean and intuitive interface.
- **Cross-Platform**: Available on multiple platforms.
- **Homebrew Support**: Fully supports legal homebrew games and applications.
- **Ongoing Development**: Stay tuned for frequent updates as Sumi evolves!
---
## Suggested GPU Drivers 🚀
To achieve the best performance, download the best driver for your device and apply it in Sumi.
**Mesa / Adreno - Sources**<br>
[The most universal drivers]
[GitHub Releases - K11MCH1/AdrenoToolsDrivers](https://github.com/K11MCH1/AdrenoToolsDrivers/releases)
**Mesa / Adreno - Sources:**<br>
[Especially for old Android Devices]
[GitHub Releases - XForYouX/Turnip](https://github.com/XForYouX/Turnip_Driver/releases)
**Mesa / Adreno - Sources:**<br>
[These drivers are better performing on some devices]
[GitHub Releases - Tiago/Turnip](https://github.com/tiagosouzacandido/AdrenoToolsTurnipDrivers/releases)
**Anbernic / Retroid / AYN (Snapdragon Versions):**<br>
[The probably best drivers for Anbernic, especially for Snapdragon 865 handhelds]
[GitHub Sources - MrPurple/Freedreno](https://github.com/MrPurple666/freedreno-CI/releases)
**Mali GPU:**<br>
[Mali GPUs unfortunately have poor driver support,
so we have prepared our own universal Sumi driver for Mali and the 8 Elite. If you can, please try to find a better one for your device!]
[GitHub Sources - Sumi/Sumi Universal Driver W1](https://github.com/user-attachments/files/20505703/Sumi.8.Elite.Driver.Experimental.W1.zip)
**Snapdragon 8 Elite / Adreno 830:**<br>
Unfortunately, Mesa does not provide support for the 8 Elite and has no defined plans to support it:
https://gitlab.freedesktop.org/mesa/mesa/-/issues/12066.
Because of this, this SoC is not recommended for emulation.
K11MCH1's Mesa sources and fixes are closed, so this is not easy in any way.
The good news is that we are trying our best to support every user and every device, so we have started our internal process for porting drivers for the 8 Elite. Developing drivers for a new SoC is really demanding and takes time. We hope that we, or anyone else, will be able to provide the best Switch emulation drivers for 8 Elite users.
~ Sumi Team :)
---
## DLC Installation Alternative 💾
We found that installing DLCs stopped working in many titles, so we have a suggestion: install our last version on which all DLCs work properly, add every DLC you want, and then reinstall the latest version. We know it's not a perfect solution, but we are consistently working on bringing back DLC support.
If you want to test it:
https://github.com/ovsky/sumi-emu/releases/tag/v0.3.0
---
## Getting Started 💡
1. **Download and Install**: Head over to the [downloads page](https://git.sumi-emu.org/Sumi/Sumi/releases) to grab the latest release.
2. **Add Homebrew Games**: Sumi is built to play homebrew games. Add them to your game directory and enjoy!
3. **Configure Your Settings**: Customize your emulator settings to suit your performance needs.
## Source Code 🔧
Sumi is an open-source project. You can find the source code on our official Git repository:
- [Sumi Source Code](https://git.sumi-emu.org/)
We welcome contributions! Check out the repository and feel free to submit issues or pull requests to help improve Sumi.
## Legal Disclaimer 📜
Sumi is a **homebrew** emulator designed to support legally created and distributed homebrew software. It does not support piracy, nor is it intended for illegal purposes. Using Sumi to play pirated copies of games is a violation of copyright law. Sumi is not affiliated with or endorsed by **Nintendo**, and all **Nintendo** trademarks and copyrights are the property of their respective owners.
> **We highly encourage users to respect intellectual property rights and to only use Sumi with legal, homebrew content.**
## License 📄
Sumi is licensed under the [GPL License](https://www.gnu.org/licenses/gpl-3.0.html). See the full license in the [LICENSE](LICENSE) file for more details.
## Contributing ✨
We are always looking for developers, testers, and enthusiasts to contribute to Sumi. Whether you want to submit a pull request, report an issue, or suggest new features, all contributions are welcome. Please follow our [contributing guidelines](CONTRIBUTING.md) to get started.
## Contact Us 📬
[WIP]
For any inquiries or to follow Sumi's development journey, reach out to us:
- **Official Website**: [https://sumi-emu.org](https://sumi-emu.org)
- **Source Code**: [https://github.com/ovsky/sumi-emu/](https://github.com/ovsky/sumi-emu)
- **Twitter**: [@SumiEmu](https://twitter.com/SumiEmu)
---
## Special Thanks 🩷
Huge thanks to the **Sudachi Emulator** team for providing an amazing database and especially to the **Citron Team** for their huge contribution to emulator development and continuous amazing work.
Without you this project would not exist!
Made with full love ❤️ for **[Citron](https://citron-emu.org/)** and **[Sudachi](https://sudachi.emuplace.app/)**
---
### Disclaimer
- **Nintendo Switch** is a trademark of **Nintendo Co., Ltd**
- **Android** is a trademark of **Google LLC**
|
https://github.com/davidesantangelo/krep
|
krep
Fast text search tool with advanced algorithms, SIMD acceleration, multi-threading, and regex support. Designed for rapid, large-scale pattern matching with memory-mapped I/O and hardware optimizations.
Languages: C (99.2%), Makefile (0.8%)
.github
.github
test
test
...
.gitignore
.gitignore
LICENSE
LICENSE
Makefile
Makefile
README.md
README.md
aho_corasick.c
aho_corasick.c
> README.md
# K(r)ep - A high-performance string search utility


`krep` is an optimized string search utility designed for maximum throughput and efficiency when processing large files and directories. It is built with performance in mind, offering multiple search algorithms and SIMD acceleration when available.
> **Note:**
> Krep is not intended to be a full replacement or direct competitor to feature-rich tools like `grep` or `ripgrep`. Instead, it aims to be a minimal, efficient, and pragmatic tool focused on speed and simplicity.
>
> Krep provides the essential features needed for fast searching, without the extensive options and complexity of more comprehensive search utilities. Its design philosophy is to deliver the fastest possible search for the most common use cases, with a clean and minimal interface.
## The Story Behind the Name
The name "krep" has an interesting origin. It is inspired by the Icelandic word "kreppan," which means "to grasp quickly" or "to catch firmly." I came across this word while researching efficient techniques for pattern recognition.
Just as skilled fishers identify patterns in the water to locate fish quickly, I designed "krep" to find patterns in text with maximum efficiency. The name is also short and easy to remember—perfect for a command-line utility that users might type hundreds of times per day.
## Key Features
- **Multiple search algorithms**: Boyer-Moore-Horspool, KMP, Aho-Corasick for optimal performance across different pattern types
- **SIMD acceleration**: Uses SSE4.2, AVX2, or NEON instructions when available for blazing-fast searches
- **Memory-mapped I/O**: Maximizes throughput when processing large files
- **Multi-threaded search**: Automatically parallelizes searches across available CPU cores
- **Regex support**: POSIX Extended Regular Expression searching
- **Multiple pattern search**: Efficiently search for multiple patterns simultaneously
- **Recursive directory search**: Skip binary files and common non-code directories
- **Colored output**: Highlights matches for better readability
- **Specialized algorithms**: Optimized handling for single-character and short patterns
- **Match Limiting**: Stop searching a file after a specific number of matching lines are found.
## Installation
### Using Homebrew (macOS)
If you are on macOS and have Homebrew installed, you can install `krep` easily:
```bash
brew install krep
```
### Building from Source
```bash
# Clone the repository
git clone https://github.com/davidesantangelo/krep.git
cd krep
# Build and install
make
sudo make install
# uninstall
sudo make uninstall
```
The binary will be installed to `/usr/local/bin/krep` by default.
### Requirements
- GCC or compatible C compiler
- POSIX-compliant system (Linux, macOS, BSD)
- pthread support
### Build Options
Override default optimization settings in the Makefile:
```bash
# Disable architecture-specific optimizations
make ENABLE_ARCH_DETECTION=0
```
## Usage
```bash
krep [OPTIONS] PATTERN [FILE | DIRECTORY]
krep [OPTIONS] -e PATTERN [FILE | DIRECTORY]
krep [OPTIONS] -f FILE [FILE | DIRECTORY]
krep [OPTIONS] -s PATTERN STRING_TO_SEARCH
krep [OPTIONS] PATTERN < FILE
cat FILE | krep [OPTIONS] PATTERN
```
## Usage Examples
Search for a fixed string in a file:
```bash
krep -F "value: 100%" config.ini
```
Search recursively:
```bash
krep -r "function" ./project
```
Whole word search (matches only complete words):
```bash
krep -w 'cat' samples/text.en
```
Use with piped input:
```bash
cat krep.c | krep 'c'
```
## Command Line Options
- `-i, --ignore-case` Case-insensitive search
- `-c, --count` Count matching lines only
- `-o, --only-matching` Print only the matched parts of lines
- `-e PATTERN, --pattern=PATTERN` Specify pattern(s). Can be used multiple times.
- `-f FILE, --file=FILE` Read patterns from FILE, one per line.
- `-m NUM, --max-count=NUM` Stop searching each file after finding NUM matching lines.
- `-E, --extended-regexp` Use POSIX Extended Regular Expressions
- `-F, --fixed-strings` Interpret pattern as fixed string(s) (default unless -E is used)
- `-r, --recursive` Recursively search directories
- `-t NUM, --threads=NUM` Use NUM threads for file search (default: auto)
- `-s STRING, --string=STRING` Search in the provided STRING instead of file(s)
- `-w, --word-regexp` Match only whole words
- `--color[=WHEN]` Control color output ('always', 'never', 'auto')
- `--no-simd` Explicitly disable SIMD acceleration
- `-v, --version` Show version information
- `-h, --help` Show help message
## Performance Benchmarks
Comparing performance on the same text file with identical search pattern:
| Tool | Time (seconds) | CPU Usage |
|---------|---------------:|----------:|
| krep | 0.106 | 328% |
| grep | 4.400 | 99% |
| ripgrep | 0.115 | 97% |
*Krep is approximately 41.5x faster than grep and slightly faster than ripgrep in this test. Benchmarks performed on Mac Mini M4 with 24GB RAM.*
The benchmarks above were conducted using the subtitles2016-sample.en.gz dataset, which can be obtained with:
```bash
curl -LO 'https://burntsushi.net/stuff/subtitles2016-sample.en.gz'
```
## How Krep Works
Krep achieves its high performance through several key techniques:
### 1. Smart Algorithm Selection
Krep automatically selects the optimal search algorithm based on the pattern and available hardware:
- **Boyer-Moore-Horspool** for most literal string searches
- **Knuth-Morris-Pratt (KMP)** for very short patterns and repetitive patterns
- **memchr optimization** for single-character patterns
- **SIMD Acceleration** (SSE4.2, AVX2, or NEON) for compatible hardware
- **Regex Engine** for regular expression patterns
- **Aho-Corasick** for efficient multiple pattern matching
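As a rough illustration of this kind of dispatch, here is a small Python sketch mirroring the selection criteria above. The thresholds are invented for the example and are not krep's actual tuning logic.

```python
# Illustrative heuristic only -- not krep's actual dispatch code. It mirrors the
# criteria listed above: regexes, multiple patterns, single characters, short
# patterns, and general literals each get a different engine.
def pick_algorithm(patterns, is_regex=False, simd_available=True):
    if is_regex:
        return "regex engine"
    if len(patterns) > 1:
        return "Aho-Corasick"
    pattern = patterns[0]
    if len(pattern) == 1:
        return "memchr"
    if simd_available and len(pattern) <= 16:   # threshold is an assumption
        return "SIMD (SSE4.2/AVX2/NEON)"
    if len(pattern) < 4:
        return "KMP"
    return "Boyer-Moore-Horspool"

print(pick_algorithm(["e"]))                                # memchr
print(pick_algorithm(["function"], simd_available=False))   # Boyer-Moore-Horspool
print(pick_algorithm(["err", "warn"]))                      # Aho-Corasick
```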
### 2. Multi-threading Architecture
Krep utilizes parallel processing to dramatically speed up searches:
- Automatically detects available CPU cores
- Divides large files into chunks for parallel processing
- Implements thread pooling for maximum efficiency
- Optimized thread count selection based on file size
- Careful boundary handling to ensure no matches are missed
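The boundary handling can be shown with a toy Python version of the chunking idea: each worker scans its own slice plus a small overlap, but only counts matches that start inside its slice, so a match straddling a boundary is found exactly once. Chunk sizes and thread counts here are arbitrary, not krep's tuned values.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of chunked parallel search with overlapping boundaries.
def count_matches(text: str, pattern: str, workers: int = 4) -> int:
    overlap = len(pattern) - 1
    chunk = max(1, -(-len(text) // workers))  # ceiling division

    def search(start: int) -> int:
        end = min(len(text), start + chunk + overlap)
        piece, count, pos = text[start:end], 0, 0
        while (pos := piece.find(pattern, pos)) != -1:
            if pos < chunk:        # match starts inside this worker's slice
                count += 1
            pos += 1
        return count

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(search, range(0, len(text), chunk)))

print(count_matches("abcabcabc" * 1000, "cab"))
```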
### 3. Memory-Mapped I/O
Instead of traditional read operations:
- Memory maps files for direct access by the CPU
- Significantly reduces I/O overhead
- Enables CPU cache optimization
- Progressive prefetching for larger files
### 4. Optimized Data Structures
- Zero-copy architecture where possible
- Efficient match position tracking
- Lock-free aggregation of results
### 5. Skipping Non-Relevant Content
When using recursive search (`-r`), Krep automatically:
- Skips common binary file types
- Ignores version control directories (`.git`, `.svn`)
- Bypasses dependency directories (`node_modules`, `venv`)
- Detects binary content to avoid searching non-text files
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Author
- **Davide Santangelo** - [GitHub](https://github.com/davidesantangelo)
## License
This project is licensed under the BSD-2 License - see the LICENSE file for details.
Copyright © 2025 Davide Santangelo
|
https://github.com/macos-fuse-t/scorpi
|
scorpi
Scorpi - A Modern Hypervisor (for macOS)
Languages: C (99.8%)
firmware
firmware
include
include
libfdt
libfdt
libnv
libnv
libutil
libutil
...
.clang-format
.clang-format
BUILD.md
BUILD.md
LICENSE
LICENSE
README.md
README.md
build.sh
build.sh
> README.md
# Scorpi - A Modern Lightweight General-Purpose Hypervisor
## Overview
Scorpi is a modern, lightweight, general-purpose hypervisor designed to be an alternative to QEMU.
### Key Features
- **Modern**: Implements only modern devices, primarily VirtIO-based, avoiding legacy emulations.
- **Lightweight**: Built on FreeBSD Bhyve and written in C, with minimal code base for emulating devices.
- **General-Purpose**: Supports headless and graphical VMs, EFI boot loader, and ACPI. Can run Linux and Windows VMs.
- **Modular**: Designed to be used as an API in other applications and services. Graphics, UI and user input are separate modules, and networking can be modularized as well.
## Platform Support
Currently, Scorpi runs on Mac ARM64 using Apple's Hypervisor Framework. The plan is to expand support to:
- **Linux x86 and ARM** using **KVM**
- **Additional architectures**, including **RISC-V**
## Available Bootloaders
1. **U-Boot** - Fast and compact but lacks some advanced features such as ACPI and graphics. Best used for headless VMs that require fast start.\
[Source Code](https://github.com/macos-fuse-t/u-boot)
2. **EDK2 UEFI** - Full-featured bootloader that provides ACPI support, frame buffer, and a variety of boot device drivers.\
[Source Code](https://github.com/macos-fuse-t/edk2)
## Running Linux VMs
1. Download an ISO that supports ARM<sub>64</sub> architecture.
2. Create an empty disk with:
```sh
mkfile -n [size] [img_file]
```
3. Example command to start a VM:
```sh
./builddir/scorpi -s 0,hostbridge -o console=stdio -o bootrom=./firmware/SCORPI_EFI.fd -s 1,xhci -u kbd -u tablet -s 2,virtio-blk,[img_file] -s 3,virtio-blk,[iso_file],ro -s 4,virtio-net,slirp -s 5,virtio-gpu,hdpi=on -m 2G -c 2 -l /tmp/vm_sock vm1
```
To use a graphical viewer, refer to the following reference project: [ScorpiViewer](https://github.com/macos-fuse-t/ScorpiViewer)
## Running a Windows VM
The easiest way to try a Windows 11 VM is to download a Microsoft HyperV preview image and convert it to Scorpi.
1. Download a VHDX image from:\
[Windows Insider Preview ARM<sub>64</sub>](https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewarm64)
2. Convert the disk image to Scorpi format using `qemu-img`:
```sh
qemu-img convert -f vhdx -O raw input.vhdx win11.img
```
3. Run Scorpi:
```sh
./builddir/scorpi -s 0,hostbridge -o bootrom=./firmware/SCORPI_EFI.fd -s 1,xhci -u kbd -u tablet -u net,backend=slirp -s 2,ahci-hd,win11.img -s 3,virtio-gpu,fb=on -l /tmp/vm_sock -c 4 -m 4G vm1
```
4. Run ScorpiViewer.
## Future Roadmap
- Implement and add missing features (file sharing, copy/paste support)
- Implement Linux support on top of KVM.
- Add Windows DirectX 12 display driver.
- Extend support to RISC-V and other platforms.
## Related Projects
[U-Boot bootloader](https://github.com/macos-fuse-t/u-boot)
[EDK2 bootloader](https://github.com/macos-fuse-t/edk2)
[Scorpi Viewer](https://github.com/macos-fuse-t/ScorpiViewer)
## Licensing
Scorpi is released under a **permissive license**, providing flexibility for various use cases.
## Get Involved
Contributions and feedback are welcome! Stay tuned for updates as Scorpi evolves into a powerful and versatile hypervisor.
For inquiries, contact **Alex Fishman** at [alex@fuse-t.org](mailto\:alex@fuse-t.org).
|
https://github.com/nerficg-project/HTGS
|
HTGS
Official code release for "Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency"
Languages: Cuda (74.1%), Python (10.1%), C++ (8.6%), C (7.2%)
HTGSCudaBackend
HTGSCudaBackend
...
.gitignore
.gitignore
LICENSE
LICENSE
Loss.py
Loss.py
Model.py
Model.py
README.md
README.md
> README.md
# Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
Florian Hahlbohm, Fabian Friederichs, Tim Weyrich, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Martin Eisemann, Marcus Magnor<br>
| [Project page](https://fhahlbohm.github.io/htgs/) | [Paper](https://arxiv.org/abs/2410.08129) | [Evaluation Images (9 GB)](https://graphics.tu-bs.de/upload/publications/hahlbohm2025htgs/htgs_full_eval.zip) | [Colab](https://colab.research.google.com/drive/1DxnIqrZ-eSSvfjhK9P1JdibABm_AJEFp?usp=sharing) |<br>
## Overview
This repository contains the official implementation of "Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency".
It is provided as an extension to the [NeRFICG](https://github.com/nerficg-project) framework.
Beyond just trying out our method, you have come to the right place if you are looking for one of the following:
- An exact and numerically stable method for computing tight screen-space bounds of a 3D Gaussian under perspective projection.
- An efficient and numerically stable approach for evaluating a 3D Gaussian at its point of maximum contribution along a ray.
- A fast and view-consistent hybrid transparency approach for blending that accelerates training and rendering without sacrificing quality.
- A fast implementation of the above, including optimized CUDA kernels.
We further provide optimized implementations for three additional blending modes alongside our default hybrid transparency blending:
1. `HYBRID_BLEND`: The K foremost fragments (core) in each pixel are alpha-blended. Remaining fragments are accumulated into an order-independent tail. Core and tail are then alpha composited to obtain the final color.
2. `ALPHA_BLEND_FIRST_K`: The K foremost fragments (core) in each pixel are alpha-blended. Remaining fragments are discarded. Same as `HYBRID_BLEND`, but without the tail.
3. `ALPHA_BLEND_GLOBAL_ORDERING`: Gaussians are "globally" sorted based on their means' z-coordinate in camera space. All fragments in each pixel are then alpha-blended in this approximate order.
4. `OIT_BLEND`: An order-independent transparency (OIT) approach that accumulates all fragments in each pixel using a weighted sum. Same as the tail in `HYBRID_BLEND`, but for all fragments.
You can find additional notes at the bottom of this page.
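To make the default mode concrete, here is a minimal per-pixel NumPy sketch of `HYBRID_BLEND`. The tail is modeled as a simple alpha-weighted average, which is an illustrative assumption rather than the exact formulation from the paper, and fragments are given as (depth, rgb, alpha) tuples.

```python
import numpy as np

# Per-pixel sketch of HYBRID_BLEND: alpha-blend the K foremost fragments
# (core), accumulate the rest into an order-independent tail, then composite
# core over tail over the background. Tail model is an assumption.
def hybrid_blend(fragments, k=4, background=np.zeros(3)):
    fragments = sorted(fragments, key=lambda f: f[0])        # front to back
    core, tail = fragments[:k], fragments[k:]

    core_color, transmittance = np.zeros(3), 1.0
    for _, rgb, alpha in core:
        core_color += transmittance * alpha * np.asarray(rgb, dtype=float)
        transmittance *= 1.0 - alpha

    if tail:
        weights = np.array([a for _, _, a in tail])
        tail_color = sum(w * np.asarray(rgb, dtype=float)
                         for (_, rgb, _), w in zip(tail, weights)) / weights.sum()
        tail_alpha = 1.0 - np.prod(1.0 - weights)
        behind = tail_alpha * tail_color + (1.0 - tail_alpha) * background
    else:
        behind = background

    return core_color + transmittance * behind               # composite core over tail

frags = [(0.5, (1, 0, 0), 0.6), (1.0, (0, 1, 0), 0.5), (2.0, (0, 0, 1), 0.4)]
print(hybrid_blend(frags, k=2))
```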
## Getting Started
### Our Setup
All our tests as well as experiments for the paper were conducted using the following setup:
- Operating System: Ubuntu 22.04
- GPU: Nvidia GeForce RTX 4090
- CUDA Driver Version: 535.183.01
- CUDA Toolkit Version: 11.8
- Python Version: 3.11
- PyTorch Version: 2.5.1
We have verified that everything works with CUDA Toolkit Version 12.4, but did not measure performance.
We also observed a significant performance regression (~20%) when using CUDA Driver Version 560 but are unsure of the exact reason (remaining setup was unchanged, except for the CUDA Toolkit Version where we tried both 11.8 and 12.4).
### Setup
As a preparatory step, the [NeRFICG framework](https://github.com/nerficg-project/nerficg) needs to be set up.
<details>
<summary><span style="font-weight: bold;">TL;DR NeRFICG Setup</span></summary>
- Clone the NeRFICG repository and its submodules:
```shell
git clone git@github.com:nerficg-project/nerficg.git --recursive && cd nerficg
```
- Install the dependencies listed in `scripts/condaEnv.sh`, or automatically create a new conda environment by executing the script:
```shell
./scripts/condaEnv.sh && conda activate nerficg
```
- [optional] For logging via [Weights & Biases](https://wandb.ai/site), run the following command and enter your account identifier:
```shell
wandb login
```
</details>
<br>
Now, you can directly add this project as an additional method:
- Clone this repository into the `src/Methods/` directory:
```shell
git clone git@github.com:nerficg-project/HTGS.git src/Methods/HTGS
```
- Install all method-specific dependencies and CUDA extensions using:
```shell
./scripts/install.py -m HTGS
```
## Training and Inference
The HTGS method is fully compatible with the NeRFICG scripts in the `scripts/` directory.
This includes config file generation via `defaultConfig.py`,
training via `train.py`,
inference and performance benchmarking via `inference.py`,
metric calculation via `generateTables.py`,
and live rendering via `gui.py` (Linux only).
We also used these scripts for the experiments in our paper.
For detailed instructions, please refer to the [NeRFICG framework repository](https://github.com/nerficg-project/nerficg).
### Example Configuration Files
We provide exemplary configuration files for the garden scene from the [Mip-NeRF360](https://jonbarron.info/mipnerf360/) dataset as well as the playground scene from the [Tanks and Temples](https://www.tanksandtemples.org/) dataset.
For the eight *intermediate* scenes from the Tanks and Temples dataset on which we evaluate our method in the paper, we used [our own calibration](https://cloud.tu-braunschweig.de/s/J5xYLLEdMnRwYPc) obtained using COLMAP.
We recommend copying the exemplary configuration files to the `configs/` directory.
*Note:* There will be no documentation for the method-specific configuration parameters under `TRAINING.XXXX`/`MODEL.XXXX`/`RENDERER.XXXX`.
Please consult the code and/or our paper to understand what they do.
### Using Custom Data
While this method is compatible with most of the dataset loaders provided with the [NeRFICG framework](https://github.com/nerficg-project/nerficg),
we recommend using exclusively the Mip-NeRF360 loader (`src/Datasets/MipNeRF360.py`) for custom data.
It is compatible with the COLMAP format for single-camera captures:
```
custom_scene
└───images
│ │ 00000.jpg
│ │ ...
│
└───sparse/0
│ │ cameras.bin
│ │ images.bin
│ │ points3D.bin
│
└───images_2 (optional)
│ │ 00000.jpg
│ │ ...
```
To use it, just modify `DATASET.PATH` near the bottom of one of the exemplary configuration files. Furthermore, you may want to modify the following dataset configuration parameters:
- `DATASET.IMAGE_SCALE_FACTOR`: Set this to `null` to use the original resolution, or to a value between zero and one to train on downscaled images.
If `DATASET.USE_PRECOMPUTED_DOWNSCALING` is set to `true` specifying `0.5`/`0.25`/`0.125` will load images from directories `images_2`/`images_4`/`images_8` respectively.
We recommend using this feature and downscaling manually via, e.g., `mogrify -resize 50% *.jpg` for the best results.
- `DATASET.TO_DEVICE`: Set this to `false` for large datasets or if you have less than 24 GB of VRAM.
- `DATASET.BACKGROUND_COLOR`: Will be ignored (see section "Additional Notes" for more information).
- `DATASET.NEAR_PLANE`: Must not be too small to avoid precision issues. We used `0.2` for all scenes.
- `DATASET.FAR_PLANE`: Set this generously, i.e., not too tight for your scene to avoid precision issues. We used `1000.0` for all scenes.
- `DATASET.TEST_STEP`: Set to `8` for the established evaluation protocol. Set to `0` to use all images for training.
- `DATASET.APPLY_PCA`: Tries to align the world space so that the up-axis is parallel to the direction of gravity using principal component analysis.
Although it does not always work, we recommend setting this to `true` if you want to view the final model inside a GUI.
While we recommend setting `DATASET.APPLY_PCA_RESCALE` to `false`, it can be turned on to scale the scene so that all camera poses are inside the \[-1, 1\] cube.
If using your custom data fails, you have two options:
1. (Easy) Re-calibrate using, e.g., `./scripts/colmap.py -i <path/to/your/scene> --camera_mode single` and add `-u` at the end if your images are distorted.
2. (Advanced) Check the NeRFICG instructions for using custom data [here](https://github.com/nerficg-project/nerficg?tab=readme-ov-file#training-on-custom-image-sequences) and optionally dive into the NeRFICG code to extend one of the dataloaders to handle your data.
### Exporting as .ply
*Disclaimer:* The downstream application for which you use these .ply files must do a ray-based evaluation of 3D Gaussians to get the correct results.
Expect to see artifacts if the application uses the EWA splatting approach as in standard 3DGS.
We provide a script `export_ply.py` inside this repository, which extracts all 3D Gaussians from a trained model into a .ply file.
For compatibility reasons, we provide the output in the same format as the 3DGS implementation by Inria.
To use the script, move it to the `scripts/` directory of your NeRFICG installation.
Running it is similar to the `inference.py` script:
```
./scripts/export_ply.py -d output/HTGS/<OUTPUT_DIRECTORY>
```
## Additional Notes
The primary goal of this codebase is to provide a foundation for future research.
As such, we have made an effort to keep the CUDA code of the four different blending modes mostly independent of each other.
This results in a noteworthy amount of code duplication, but should allow for easy modification and extension of the individual blending modes.
We also highlight areas where we think our method could be improved or extended:
<details>
<summary><span>Densification</span></summary>
A side effect of using a ray-based evaluation for the 3D Gaussians during rendering is that the positional gradients which standard 3DGS uses for densification have significantly different properties.
Similar to much of the concurrent work in this direction, we observed this to be a major challenge and had to come up with a solution.
You can find a detailed description of our modifications in our paper.
However, we would like to clarify that the current densification strategy of our method is far from being optimal.
For example, on the bicycle scene from the Mip-NeRF360 dataset the number of Gaussians increases up to 8M in the first half of training, but then the importance pruning reduces this to 4.5M in iteration 16,000 while quality metrics go up.
Observations like these lead us to believe that an optimal densification strategy could drastically decrease training times and likely also improve reconstruction quality.
</details>
<details>
<summary><span>Anti-aliasing</span></summary>
We use a single ray through the center of each pixel for the ray-based evaluation of the 3D Gaussians.
Therefore, it is possible for Gaussians to fall between pixels, making them invisible during rendering.
This is a standard aliasing problem, which the EWA splatting algorithm used in 3DGS resolves by applying a low-pass filter to the 2D covariance matrix (`+0.3`).
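For reference, this dilation can be written as follows (a sketch in our notation, with $\Sigma_{2D}$ denoting the projected $2 \times 2$ screen-space covariance):

$$
\hat{\Sigma}_{2D} = \Sigma_{2D} + 0.3 \, I_{2 \times 2},
$$

which enforces a minimum screen-space footprint on the order of a pixel for every Gaussian.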
With a ray-based evaluation, however, this solution is not available as Gaussians are not evaluated on the image plane but in 3D space instead.
In contrast to other recent works that do ray-based 3D Gaussian rendering, we are not required to limit the minimum size of Gaussians because our inversion-free method for computing screen-space bounding boxes and ray-based evaluation can handle even degenerate Gaussians where a Gaussian's extent in one of its major axes is zero.
Nonetheless, our approach still has the aforementioned aliasing problems.
The corresponding artifacts become visible when you open a reconstructed model inside our GUI and zoom out until you see subtle flickering upon camera movement.
To avoid this being a problem during training, we employ the 3D filter from Mip-Splatting that tries to prevent 3D Gaussians from becoming smaller than a pixel in the closest training camera by applying a bit of signal theory.
We think that this solution is far from optimal and should be addressed in the future.
It is worth noting that a solution for this problem could likely be applied to all methods that do ray-based evaluation of 3D Gaussians or constant density ellipsoids.
</details>
<details>
<summary><span>Near plane clipping</span></summary>
Our approach for computing tight and perspective-correct bounding boxes for each 3D Gaussian is currently unable to handle certain edge-cases.
Looking at the 3D Gaussians in camera space, our approach can deal with arbitrary extents along the x-axis and y-axis, but fails to do so for certain cases with respect to extents along the z-axis.
In our implementation, we therefore cull all Gaussians for which the ellipsoid obtained by applying the used cutoff value to the 3D Gaussian is not fully between the z=near and z=far planes.
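One way to state this culling test explicitly (our notation, assuming a camera-space mean $\mu$, covariance $\Sigma$, and cutoff value $\tau$) is to keep a Gaussian only if

$$
z_{\text{near}} \le \mu_z - \sqrt{\tau \, \Sigma_{zz}}
\quad \text{and} \quad
\mu_z + \sqrt{\tau \, \Sigma_{zz}} \le z_{\text{far}},
$$

where $\sqrt{\tau \, \Sigma_{zz}}$ is the extent of the cutoff ellipsoid along the camera z-axis.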
It is easy to see that this results in some Gaussians being culled, although they should be partially visible.
Especially at the near plane, this can make a major difference.
It is straightforward to extend our bounding box computation and culling so that Gaussians whose corresponding ellipsoid lies fully between the z=0 and z=far planes are no longer discarded.
However, including such Gaussians would still not be enough and further complicates how the point of maximum contribution should be calculated during blending, as the point of maximum contribution along a viewing ray might no longer lie behind the near plane.
It would be nice to see a more elegant solution that matches what is possible with, e.g., an OptiX-based implementation that uses a spatial acceleration structure to determine intersections.
</details>
<details>
<summary><span>Separate alpha thresholds for core and tail</span></summary>
To obtain good results, our hybrid transparency blending currently requires using a higher alpha threshold than what is used in standard 3DGS (`0.05` vs. `0.0039`).
While it is possible to also use `0.05` as the threshold for the tail, we found that using `0.0039` for the tail results in better quality.
We think a higher alpha threshold is needed for the core because of its limited capacity.
More precisely, we observe accurate results if the core uses most of the transmittance in each pixel, which is not possible if the core is occupied by low-alpha fragments.
Extending the blending function to account for overlap between Gaussians may resolve this issue.
An interesting byproduct of this two-threshold approach is that some view-dependent effects are represented by very large Gaussians with low opacity that will always be part of the tail.
However, further analysis is needed to understand if this is generally bad or could even be beneficial in some cases.
</details>
<details>
<summary><span>Background color blending</span></summary>
As we mainly looked at unbounded real-world scenes, input images did not have an alpha mask. This led us to use black as the background color for all scenes.
For a black background, blending the background color into the final image is mathematically equivalent to not doing anything in that regard.
Therefore, our rasterization module currently only supports having a black background.
However, it should be reasonably simple to extend our rasterization module to handle arbitrary background colors during both optimization and inference.
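For completeness, with front-to-back compositing the background color only enters through the residual transmittance term (a sketch in our notation):

$$
C = \sum_{i=1}^{N} T_i \, \alpha_i \, c_i + T_{N+1} \, c_{\text{bg}},
\qquad T_i = \prod_{j<i} \left(1 - \alpha_j\right),
$$

so for $c_{\text{bg}} = 0$ the extra term vanishes, which is why a black background requires no special handling.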
</details>
## License and Citation
This project is licensed under the MIT license (see [LICENSE](LICENSE)).
If you use this code for your research projects, please consider a citation:
```bibtex
@article{hahlbohm2025htgs,
title = {Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency},
author = {Hahlbohm, Florian and Friederichs, Fabian and Weyrich, Tim and Franke, Linus and Kappel, Moritz and Castillo, Susana and Stamminger, Marc and Eisemann, Martin and Magnor, Marcus},
journal = {Computer Graphics Forum},
volume = {44},
number = {2},
doi = {10.1111/cgf.70014},
year = {2025},
url = {https://fhahlbohm.github.io/htgs/}
}
```
|
https://github.com/SamuelTulach/HookGuard
|
HookGuard
Hooking Windows' exception dispatcher to protect process's PML4
Languages: C (98.0%), C++ (2.0%)
Assets
Assets
HookGuard
HookGuard
TestProcess
TestProcess
...
.gitignore
.gitignore
README.md
README.md
> README.md
# HookGuard
Inspired by popular game anti-cheat solutions, this project utilizes a global kernel exception hook to obfuscate the target process's PML4 table address and log every attempted address space switch into the target process, while not triggering [PatchGuard](https://en.wikipedia.org/wiki/Kernel_Patch_Protection) and remaining [HVCI compatible](https://learn.microsoft.com/en-us/windows-hardware/drivers/bringup/device-guard-and-credential-guard).

## How does this work?
> [!WARNING]
> To have at least some chance of understanding what is going on, make sure you are aware of [the basics of memory paging](https://connormcgarr.github.io/paging/).
The value that is loaded into the `CR3` register on a [context switch](https://www.techtarget.com/whatis/definition/context-switch) or when [attaching to a process](https://learn.microsoft.com/en-us/windows-hardware/drivers/debuggercmds/-attach--attach-to-process-) (`KiAttachProcess`) is stored in `KPROCESS->DirectoryTableBase`.

If the value being written to `CR3` has any reserved bits (48:63) set to `1`, a general protection (`#GP`) exception will be triggered.

Under normal circumstances, this would result in an immediate system crash, but with a clever hook chain, it can be used to log and control any attempted writes of such a value.
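As a rough illustration of the idea, the protection side could look something like the following. This is a simplified sketch only, not the actual HookGuard code; the `GuardCrypt` helper, the `DirectoryTableBase` offset, and the chosen reserved bit are assumptions for this example.
```c
#include <ntddk.h>

// Assumed for this sketch: GuardCrypt is the project's reversible transform,
// and 0x28 is the KPROCESS.DirectoryTableBase offset on the tested build.
extern UINT64 GuardCrypt(UINT64 AddressOfPageDirectory);
#define DIRECTORY_TABLE_BASE_OFFSET 0x28
#define CR3_RESERVED_BIT            (1ULL << 63)

VOID GuardProtectProcess(PEPROCESS Process)
{
    PUINT64 dirBase = (PUINT64)((PUCHAR)Process + DIRECTORY_TABLE_BASE_OFFSET);
    UINT64 cr3 = *dirBase;

    // Scramble the page-directory address (bits 12:47) and set a reserved
    // bit (48:63), so any "mov cr3, <value>" raises #GP and lands in the hook.
    UINT64 pageDirectory = (cr3 >> 12) & 0xFFFFFFFFFULL;
    cr3 = (cr3 & 0xFFF) |
          ((GuardCrypt(pageDirectory) & 0xFFFFFFFFFULL) << 12) |
          CR3_RESERVED_BIT;

    *dirBase = cr3;
}
```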
Exceptions are handled in the `KiDispatchException` function. For kernel-mode exceptions, the debugger routine `KdTrap` is called first; only if it returns `0`, indicating that the exception was not handled by the debugger, is `RtlDispatchException` used, which attempts to execute the appropriate exception handler.

`KdTrap` checks whether `KdpDebugRoutineSelect` is not null, and if so, it executes `KdpTrap`.

Unless the exception code is `0x80000003` (`STATUS_BREAKPOINT`), `KdpReport` will be executed, and the function will return.

`KdpReport` normally returns `0` and exits immediately (apart from debugger-specific exceptions) unless `NtGlobalFlag` has `FLG_STOP_ON_EXCEPTION`. This is not enabled by default, so we have to overwrite it. This will cause every exception to go to `KdEnterDebugger`, which then executes `KeFreezeExecution`.

Inside `KeFreezeExecution`, `KeStallExecutionProcessor` will be called if `KdDebuggerLock` is locked (has a non-zero value). We will overwrite it to always be `1`.

Finally, `KeStallExecutionProcessor` will end up calling `HalpStallCounter+0x70`, which we can overwrite to hook.

In our hook, we will perform a stack walk to locate `KeFreezeExecution` to get the original IRQL (changed in this function) and `KdTrap` to get the exception record and context.
```c
CONTEXT frames[10] = { 0 };
for (ULONG frame = 0; frame < 10; frame++)
{
ULONG64 imageBase;
const PRUNTIME_FUNCTION runtimeFunction = RtlLookupFunctionEntry(current.Rip, &imageBase, NULL);
if (!runtimeFunction)
break;
PVOID handlerData;
ULONG64 establisherFrame;
KNONVOLATILE_CONTEXT_POINTERS nvContext = { 0 };
RtlVirtualUnwind(
UNW_FLAG_NHANDLER,
imageBase,
current.Rip,
runtimeFunction,
&current,
&handlerData,
&establisherFrame,
&nvContext);
if (!current.Rip)
break;
frames[frame] = current;
if (!(current.Rip >= g_KdTrap && current.Rip < g_KdTrap + 0x50))
continue;
/*
* 0: HookGuard!HookEntry+0x2d
* 1: nt!KeStallExecutionProcessor+0x9b
* 2: nt!KeFreezeExecution+0x110
* 3: nt!KdEnterDebugger+0x6d
* 4: nt!KdpReport+0x74
* 5: nt!KdpTrap+0x160
* 6: nt!KdTrap+0x2d
*/
const ULONG64 originalIrql = *(ULONG64*)(frames[2].Rsp + sizeof(ULONG64) * 1);
_enable();
__writecr8(originalIrql);
const PEXCEPTION_RECORD exceptionRecord = *(PEXCEPTION_RECORD*)current.Rsp;
const PCONTEXT exceptionContext = *(PCONTEXT*)(current.Rsp + sizeof(ULONG64) * 10);
/* ... */
}
```
Then we can implement custom exception handling for writes of an invalid `CR3` value, for example to apply a custom hash function to it.
```c
VOID HookHandlePrivilegedInstruction(PEXCEPTION_RECORD exceptionRecord, PCONTEXT context)
{
if (exceptionRecord->ExceptionCode != STATUS_PRIVILEGED_INSTRUCTION)
return;
// mov cr3, xxx
if (*(PWORD)context->Rip != 0x220F)
return;
BYTE operand = *(PBYTE)(context->Rip + 2);
operand &= 7;
const UINT64* registers = &context->Rax;
const UINT64 invalidCr3 = registers[operand];
CR3 cr3;
cr3.AsUInt = invalidCr3;
cr3.Reserved3 = 0x0;
cr3.AddressOfPageDirectory = GuardCrypt(cr3.AddressOfPageDirectory);
KdpPrint("Fixing CR3 from 0x%p to 0x%p\n", invalidCr3, cr3.AsUInt);
InterlockedIncrement64(&g_TotalResolved);
__writecr3(cr3.AsUInt);
context->Rip += 3;
g_ZwContinue(context, FALSE);
HookBreakpoint();
}
```
Remember that under normal circumstances, `KdTrap` would return `0`, and `RtlDispatchException` would be called. This is not the case here due to our modifications. We can either try to restore the context and return from `KdTrap`, or, if we want to make our life easier, we can just directly call `RtlDispatchException`.
```c
if (exceptionRecord->ExceptionCode == STATUS_PRIVILEGED_INSTRUCTION)
HookHandlePrivilegedInstruction(exceptionRecord, exceptionContext);
g_RtlDispatchException(exceptionRecord, exceptionContext);
```
We also need to make two more overwrites. One is `KdIgnoreUmExceptions`, so that `KdTrap` won't get called when handling user-mode exceptions, and the other is `PoAllProcIntrDisabled` to speed up execution.

## Compiling and testing
To compile the project, you will need [Visual Studio 2022](https://visualstudio.microsoft.com/) with the [Windows Driver Kit (WDK)](https://learn.microsoft.com/en-us/windows-hardware/drivers/download-the-wdk) and the associated Windows SDK installed.
Compile the driver with a test signing certificate, and enable [test signing mode](https://learn.microsoft.com/en-us/windows-hardware/drivers/install/the-testsigning-boot-configuration-option) if you haven't already:
```
bcdedit /set TESTSIGNING ON
```
Register and start the driver:
```
sc create HookGuard binPath="C:\HookGuard.sys" type=kernel
sc start HookGuard
```
Launch the test process `TestProcess.exe`. An internet connection is required so that [debug symbols](https://en.wikipedia.org/wiki/Debug_symbol) can be downloaded and parsed.
> [!TIP]
> If you are going to use a kernel-mode debugger, keep in mind that breakpoints also trigger exceptions and subsequent calls to the hooked routine. Placing breakpoints in certain places (like the start of `KdTrap`) will cause the debugger and system to hang.
## Compatibility
This project was tested on Windows 10 22H2 19045.5011. **Windows 11 is not supported**, as it does not use `HalpStallCounter` in `KeStallExecutionProcessor`.
However, it is possible to use `HalpPerformanceCounter` instead. I have already spent a few dozen hours jumping between IDA and WinDbg, so I am just going to leave it as is. If I ever revisit this project, reworking it to function on Windows 11 would be the priority.

## Credits
- [EasyPdb](https://github.com/Kwansy98/EasyPdb) - used by `TestProcess.exe` to send offsets into the driver
- [ia32-doc](https://github.com/ia32-doc/ia32-doc) - platform definitions
- [EAC CR3 protection article](https://0avx.github.io/posts/easyanticheat-cr3-protection/) by [@0avx](https://github.com/0avx) - to my knowledge, one of the first in-depth articles on the subject, and an inspiration for this project
|
https://github.com/AaronFriel/qmodem-4.51
|
qmodem-4.51
Languages: Pascal (93.9%), Assembly (5.9%)
src
src
...
README.md
README.md
> README.md
# QModem 4.51 Source Code
The source code release of **QModem 4.51**, an MS-DOS telecommunications program authored by John Friel III (1960–2024). This source snapshot reflects the state of QModem "Test-Drive" edition, version 4.51, as it existed in early 1992. The release is presented in the hope it may prove valuable as a historical artifact, for telecommunications enthusiasts, retrocomputing hobbyists, or anyone interested in the inner workings of a classic DOS comms package.
QModem was a widely-used terminal communications program for MS-DOS, supporting a rich array of modem protocols, scripting, user customization, modem auto-configuration, and even a "Host Mode" for basic BBS-like operation.
---
## Historical Overview
**QModem** was developed throughout the 1980s and early 1990s as a competitor to programs such as Procomm, Telix, and others. It provided robust support for:
- Many modem speeds and hardware types (8250, 16450, 16550 UARTs, and special hardware)
- Internal and external file transfer protocols: XMODEM, YMODEM, ZMODEM, and user-defined protocol support via external drivers
- Full-screen dialing directory (the `.FON` phonebook)
- Extensive scripting and automation via its built-in script language
- In-program configuration via a full-featured setup menu (`Alt-N`)
- ANSI/VT100/TTY/Avatar terminal emulations
- Host Mode: a mini BBS server included in the client!
- Scrollback buffer, split screen terminal
- Mouse support, custom keyboard macros, and more
---
## File Layout and Project Structure
This repository contains the complete Turbo Pascal source code, as well as supporting assembler, batch, and utility files.
### Main Directories and Files
- **.PAS** — Turbo Pascal source files implementing the main program, modules, and utilities
- **.ASM** — x86 assembler routines for performance-critical sections and hardware interfacing
- **.BAT** — DOS batch files for building, testing, and packaging
- **.OBJ, .INC** — Included binaries and Pascal include files
- **.KEY, .FON, .CNF, etc.** — Sample data, key, configuration, or phonebook files
Significant modules include:
- `QMODEM.PAS` — Main entry point
- `QMMAIN.PAS` — Main application logic
- `INITIAL.PAS` — Global configuration, terminal, and comm settings
- `COMM.PAS` / `COMM2.PAS` — Serial communications support
- `DOWNLD*.PAS` / `UPLD*.PAS` — File transfer protocol implementations
- `TP*`, `OP*` — Support code, likely Turbo Professional or custom libraries
- `HOST.PAS` — Host Mode/BBS functionality
- `FONESTUF.PAS`, `QDIAL.PAS`, etc. — Dialing directory and phonebook features
- `SCRIPTS*.PAS` — Script engine and automation
- `INSTALL*.PAS`, `QINSTALL.PAS`, `RUNQINST.PAS` — On-disk configuration and setup utility
---
## Building QModem
**This is a historical codebase.** QModem 4.51 targets MS-DOS using Turbo Pascal 5.x/6.0, with Turbo Professional and potentially other Borland or third-party libraries.
### Potential Build Approaches
- **Turbo Pascal 5.5/6.0** (MS-DOS or DOSBox): This is almost certainly the original toolchain. If you have a copy, opening `QMODEM.PAS` as the project and compiling (after possibly setting appropriate memory and overlay paths) may work. Some makefiles or batch files, e.g. `BUILD.BAT`, may be helpful, but will need adaptation to your environment.
- **TP/BP Emulation or Cross-Compilers**: [Free Pascal](https://www.freepascal.org/) includes some support for Turbo Pascal compatibility, but differences are likely extensive (including use of inline assembler, overlays, and third-party libraries).
- **Turbo Professional & Dependencies**: Many of the `TP*` units (e.g. `TpDos`, `TpCrt`, etc.) are from the [Turbo Professional library](https://en.wikipedia.org/wiki/Turbo_Professional). You'll need the corresponding TPUs and sources for your compiler version.
- **Manual Assembly of .ASM Files**: Assembler files need to be assembled (e.g. with TURBO assembler or MASM) and linked or compiled as .OBJ for use with Turbo Pascal.
- **Overlay Management**: Note the project extensively uses Borland/Turbo Pascal overlays (`.OVR` files, see `OVR01.INC` and overlay units). Disk layout and path settings for overlays must be matched as the original program expects.
#### Build Scripts
Several build-automation batch files are included, such as:
- `BUILD.BAT`
- `BUILDOVR.BAT`
- `BUG.BAT`
- `DEBUGOVR.BAT`
Inspect and adapt these scripts as necessary for your own environment.
### Modernization Caveats
- **No supported modern environment** targets this code directly. Efforts to port or run on anything but MS-DOS/Turbo Pascal 5.x/6.x are purely experimental and will require code and/or dependency adaptation.
- **Third-party libraries** (Turbo Professional, OpKey, possibly others) are required.
- **Hardware-dependence**: Much code assumes direct access to PC hardware, BIOS, and serial port interrupts.
- **Overlay management**: The overlay system (`OVERLAY.PAS`, etc.) must be supported as originally intended.
---
## Usage
This repository is for study, education, restoration, and historical curiosity. See the original QModem documentation (not included here) for user guidance. The commands, batch files, and source code reflect MS-DOS conventions and expectations.
---
**John Friel III, 1960–2024**
|
https://github.com/wqzustc/High-Performance-Tensor-Processing-Engines
|
High-Performance-Tensor-Processing-Engines
Some Hardware Architectures for GEMM
Languages: Verilog (41.4%), SystemVerilog (41.4%), Tcl (10.9%), Fortran (4.9%), Makefile (1.1%), Shell (0.3%)
OPT1
OPT1
OPT2
OPT2
OPT3_OPT4C
OPT3_OPT4C
assets
assets
library
library
...
LICENSE
LICENSE
README.md
README.md
> README.md
# Technical Documentation and Interface Description
of [Exploring the Performance Improvement of Tensor Processing Engines through Transformation in the Bit-weight Dimension of MACs | IEEE Conference Publication | IEEE Xplore](https://ieeexplore.ieee.org/abstract/document/10946737) (HPCA 2025)

- Testing Process Library: For reproducibility, the official Synopsys educational library SAED32nm is used in this repository (path: "library/saed32rvt\_tt0p85v25c.db"; the full set of process corners can be downloaded from the official website).
**Key Notes:**
- **SAED32nm**: A Synopsys-provided educational PDK for 32nm process training, compatible with tools like Design Compiler and IC Compiler.
- **Process Corner**: The `tt0p85v25c` file represents the **Typical-Typical (TT)** corner at 0.85V and 25°C. Other corners (e.g., FF/SS for fast/slow transistors) require separate downloads.
- **Application**: This library is commonly used in academic labs for ASIC flow demonstrations (e.g., synthesis, P&R) but lacks full foundry-certified DRC/LVS rules. For production designs, contact foundries (e.g., SMIC/TSMC) for licensed PDKs.
- EDA Tools:
- Area synthesis tool: Synopsys Design Compiler Version L-2016.03-SP1 for linux64
- RTL functional simulation tool: Chronologic VCS Version L-2016.06_Full64
- Netlist power simulation tool: PrimeTime Version M-2016.12-SP1 for linux64
# **Compressed Accumulative PE Array OS Style** (OPT1-OS)
## **Compressed Accumulative** Process Element (PE)
- RTL path: "OPT1/systolic_array_os/opt1_pe/"
- Synthesis script path: "/OPT1/systolic_array_os/opt1_pe/syn/run.sh"
- PrimeTime power simulation script path: "/OPT1/systolic_array_os/opt1_pe/power/pt.sh"
- RTL functional simulation:"/OPT1/systolic_array_os/opt1_pe/sim"
Execute the following commands to perform PE calculation, functional simulation, and view the waveforms (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/systolic_array_os/opt1_pe/sim
$ make vcs
$ make vd
```

Execute the following commands to perform OPT1-PE synthesis and power simulation with fsdb file(***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/systolic_array_os/opt1_pe/syn
$ sh run.sh
$ cd /OPT1/systolic_array_os/opt1_pe/power
$ sh pt.sh
```
**Comparison at the PE level (MAC vs. OPT1-PE):**
|Freq(MHz)|500|600|666|769|833|870|900|\>910|
| :------------------------------------: | :------------------------------: | :------------------------------: | :------------------------------: | :------------------------------: | :------------------------------: | :------------------------------: | :------------------------------: | :-----------------------------------: |
|MAC Area($um^{2}$)|1481|1666|**Timing VIOLATED**|**Timing VIOLATED**|**Timing VIOLATED**|**Timing VIOLATED**|**Timing VIOLATED**|**Timing VIOLATED**|
|OPT1-PE Area($um^{2}$)|/|/|1446|1482|1609|1668|1780|**Timing VIOLATED**|
***<u>Note: MAC test code in path "/OPT1/systolic_array_os/mac_pe". Area and timing report in path "/OPT1/systolic_array/opt1_pe/syn/outputs" and "/OPT1/systolic_array_os/mac_pe/syn/outputs"</u>***
Next, we evaluate the performance of the array by comparing OPT1-PE with traditional MAC (Multiply-Accumulate) units under OS-style (Output Stationary), WS-style (Weight Stationary), and 3D-Cube architecture-based TensorCore configurations.
Execute the following commands to perform MAC-based systolic array functional simulation. (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/systolic_array_os/array_mac_based/sim
$ make vcs
$ make vd
```
***<u>Note: To facilitate result comparison, we have exposed all the result output registers as output ports. Please note that in practical OS-style computing array systems, to ensure high area efficiency and meet output bandwidth requirements, the reduced results can either be output through systolic movement across all PEs (adding only a single adder per row to fuse sum and carry in the OPT1 OS-based PE array) or streamed out via selector-based pipelining after reduction. This flexibility helps minimize output bandwidth and fan-out to improve timing. Adjust the output format in your code according to your system's actual requirements!</u>***
You can modify the parameters `M`, `N`, and `K` in the testbench (/OPT1/systolic_array_os/array_mac_based/sim/test_mac_os_array.sv) to implement sub-matrix multiplication.
```verilog
//K can be adjusted arbitrarily in software, while modifying M and N requires changing the array dimension in the TPE.
parameter M = 32;
parameter K = 16;
parameter N = 32;
```
For example, set parameters `M=36`, `N=47`, and `K=98`, then run 100 random GEMM tests. The following command-line output indicates a successful run:
```bash
$ make vcs
SUCCESS: times_a=0, times_b=0, all elements match in matrix_c and tpe_matrix for size A[36,98] * B[98,47] = C[36,47]!
SUCCESS: times_a=1, times_b=0, all elements match in matrix_c and tpe_matrix for size A[36,98] * B[98,47] = C[36,47]!
...
...
SUCCESS: times_a=8, times_b=9, all elements match in matrix_c and tpe_matrix for size A[36,98] * B[98,47] = C[36,47]!
SUCCESS: times_a=9, times_b=9, all elements match in matrix_c and tpe_matrix for size A[36,98] * B[98,47] = C[36,47]!
```
Execute the following commands to perform MAC-based systolic array (OS) synthesis as the baseline. (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/systolic_array_os/array_mac_based/syn
$ sh run.sh
```
**MAC-based systolic array (OS), 32-bit accumulator:**
|M $\times$ N|16 $\times$ 16|16 $\times$ 16|16 $\times$ 16|
| :-------------------------------------------------: | :---------------------------------: | :---------------------------------: | :---------------------------------: |
|Freq(MHz)|154|167|200|
|Delay(ns)|6.44|**Timing VIOLATED**|**Timing VIOLATED**|
|Area(Total cell area)|376683|/|/|
|Area(Include Net Interconnect area and cell area)|595737|/|/|
***<u>Note: Area and timing report in path "/OPT1/systolic_array_os/array_mac_based/syn/outputs/saed32rvt_tt0p85v25c"</u>***
Execute the following commands to perform OPT1-PE-based systolic array (OS) synthesis and functional simulation. (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/systolic_array_os/array_opt1_based/sim
$ make vcs
$ make vd
$ cd /OPT1/systolic_array_os/array_opt1_based/syn
$ sh run.sh
```
**OPT1-PE-based systolic array (OS), 32-bit accumulator:**
|M $\times$ N|16 $\times$ 16|16 $\times$ 16|16 $\times$ 16|16 $\times$ 16|
| :---------------------------------------------------: | :---------------------------------: | :---------------------------------: | :---------------------------------: | :---------------------------------: |
|Freq(MHz)|200|250|322|333|
|Delay(ns)|4.87|3.94|3.04|**Timing VIOLATED**|
|Area(Total cell area)($um^{2}$)|324494|326586|362483|/|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|517038|524974|575546|/|
***<u>Note: Area and timing report in path "/OPT1/systolic_array_os/array_opt1_based/syn/outputs/saed32rvt_tt0p85v25c"</u>***
# **Compressed Accumulative PE Array WS Style** (OPT1-WS)
Execute the following commands to perform MAC-based systolic array (WS) synthesis and functional simulation as the baseline. (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/systolic_array_ws/array_mac_based/sim
$ make vcs
$ make vd
$ cd /OPT1/systolic_array_ws/array_mac_based/syn
$ sh run.sh
```
**MAC-based systolic array (WS), dynamic bit-width accumulation:**
|M $\times$ N|16 $\times$ 16|16 $\times$ 16|
| :---------------------------------------------------: | :---------------------------------: | :---------------------------------: |
|Freq(MHz)|182|200|
|Delay(ns)|5.44|**Timing VIOLATED**|
|Area(Total cell area)($um^{2}$)|276541|/|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|415393|/|
***<u>Note: Area and timing report in path "/OPT1/systolic_array_ws/array_mac_based/syn/outputs/saed32rvt_tt0p85v25c"</u>***
Execute the following commands to perform OPT1-PE-based systolic array (WS) synthesis and functional simulation. (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/systolic_array_ws/array_opt1_based/sim
$ make vcs
$ make vd
$ cd /OPT1/systolic_array_ws/array_opt1_based/syn
$ sh run.sh
```
**OPT1-PE-based systolic array (WS), dynamic bit-width accumulation:**
|M $\times$ N|16 $\times$ 16|16 $\times$ 16|16 $\times$ 16|16 $\times$ 16|16 $\times$ 16|
| :---------------------------------------------------: | :---------------------------------: | :---------------------------------: | :---------------------------------: | :---------------------------------: | :---------------------------------: |
|Freq(MHz)|222|250|286|303|322|
|Delay(ns)|4.43|3.94|3.45|3.25|**Timing VIOLATED**|
|Area(Total cell area)($um^{2}$)|288081|315124|299176|311258|/|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|474076|522276|507686|524171|/|
***<u>Note: Area and timing report in path "/OPT1/systolic_array_ws/array_opt1_based/syn/outputs/saed32rvt_tt0p85v25c"</u>***
# **Compressed Accumulative PE Array Cube Style** (OPT1-Cube)

Execute the following commands to perform MAC-based 3D-Cube synthesis and functional simulation as the baseline. (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/cube/array_mac_based/sim
$ make vcs
$ make vd
$ cd /OPT1/cube/array_mac_based/syn
$ sh run.sh
```
**MAC-based cube:**
|N $\times$ N $\times$ N|8 $\times$ 8 $\times$ 8|8 $\times$ 8 $\times$ 8|8 $\times$ 8 $\times$ 8|
| :---------------------------------------------------: | :----------------------------------: | :----------------------------------: | :----------------------------------: |
|Freq(MHz)|154|159|167|
|Delay(ns)|6.44|6.24|**Timing VIOLATED**|
|Area(Total cell area)($um^{2}$)|494745|498012|/|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|774395|778476|/|
***<u>Note: Area and timing report in path "/OPT1/cube/array_mac_based/syn/outputs"</u>***
Execute the following commands to perform OPT1-PE-based 3D-Cube synthesis and functional simulation. (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT1/cube/array_opt1_based/sim
$ make vcs
$ make vd
$ cd /OPT1/cube/array_opt1_based/syn
$ sh run.sh
```
**OPT1-PE-based cube:**
|N $\times$ N $\times$ N|8 $\times$ 8 $\times$ 8|8 $\times$ 8 $\times$ 8|
| :---------------------------------------------------: | :----------------------------------: | :----------------------------------: |
|Freq(MHz)|250|286|
|Delay(ns)|3.89|**Timing VIOLATED**|
|Area(Total cell area)($um^{2}$)|524725|/|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|864067|/|
***<u>Note: Area and timing report in path "/OPT1/cube/array_opt1_based/syn/outputs/saed32rvt_tt0p85v25c"</u>***
# Same Bit-weight **Compressor Array for GEMM** (OPT2)
.")
**Key Notes: EN-T Multiplication Principle reference paper:** [EN-T: Optimizing Tensor Computing Engines Performance via Encoder-Based Methodology | IEEE Conference Publication | IEEE Xplore](https://ieeexplore.ieee.org/abstract/document/10818037)
Execute the following commands to perform GEMM calculation, functional simulation, and view the waveforms (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT2/sim
$ make vcs
$ make vd
```
You can modify the parameters `M` and `N` in the testbench to implement sub-matrix multiplication. The value of `K` is set to 16 by default. To change the value of `K`, adjust the **reduction dimension** in the `TPE`. The value of `N` depends on the number of **PE tiles**. During testing, we generated random numbers and performed matrix multiplication based on standard functions, then compared the results with the computational outputs from the array.
For example, set parameters `M=32` and `N=32`, then run 100 random GEMM tests. The following command-line output indicates a successful run:
```bash
$ make vcs
SUCCESS: times_a=0, times_b=0, all elements match in matrix_c and tpe_matrix for size A[32,16] * B[16,32] = C[32,32]!
SUCCESS: times_a=1, times_b=0, all elements match in matrix_c and tpe_matrix for size A[32,16] * B[16,32] = C[32,32]!
...
...
SUCCESS: times_a=8, times_b=9, all elements match in matrix_c and tpe_matrix for size A[32,16] * B[16,32] = C[32,32]!
SUCCESS: times_a=9, times_b=9, all elements match in matrix_c and tpe_matrix for size A[32,16] * B[16,32] = C[32,32]!
```
For example, set parameters `M=167` and `N=7`, then run 100 random GEMM tests. The following command-line output indicates a successful run:
```bash
$ make vcs
SUCCESS: times_a=0, times_b=0, all elements match in matrix_c and tpe_matrix for size A[167,16] * B[16,8] = C[167,8]!
SUCCESS: times_a=1, times_b=0, all elements match in matrix_c and tpe_matrix for size A[167,16] * B[16,8] = C[167,8]!
...
...
SUCCESS: times_a=8, times_b=9, all elements match in matrix_c and tpe_matrix for size A[167,16] * B[16,8] = C[167,8]!
SUCCESS: times_a=9, times_b=9, all elements match in matrix_c and tpe_matrix for size A[167,16] * B[16,8] = C[167,8]!
```
Execute the following commands to perform OPT2-Array synthesis (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT2/syn/
$ sh run.sh
```
The following are typical configurations for some array sizes:
**OPT2-based mul-tree (WS):**
|K $\times$ N|16 $\times$ 4|16 $\times$ 8|16 $\times$ 16|16 $\times$ 32|
| :---------------------------------------------------: | :--------------------------------: | :--------------------------------: | :---------------------------------: | :---------------------------------: |
|Freq(MHz)|740|740|690|666|
|Delay(ns)|1.30|1.29|1.40|1.44|
|Area(Total cell area)($um^{2}$)|67171|126542|230216|462716|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|85677|165432|311363|648634|
***<u>Note: Area and timing report in path "/OPT2/syn/outputs_array/saed32rvt_tt0p85v25c"</u>***
# **Sparsity Encoding PE-Array (OS-Style) for GEMM** (OPT3 and OPT4C)

First, you need to execute the following commands to run OPT3 PE for performing vector inner products, which helps in understanding the fundamental principles of OPT3 and OPT4 multiplication. In the testbench, you can adjust parameter `K` to modify the reduction dimension size of the vectors. Run the following command to perform a test of 1000 vector inner product calculations: (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT3_OPT4C/pe/sim
$ make vcs
$ make vd
```
For example, set parameter `K=32`, then run 1000 random vector inner-product tests with normally distributed inputs. The following command-line output indicates a successful run:
```bash
$ make vcs
SUCCESS: times_a=1, elements match in tpe_vector_c and vector_c for size A[1,32] * B[32,1] = C[1,1]!
SUCCESS: times_a=2, elements match in tpe_vector_c and vector_c for size A[1,32] * B[32,1] = C[1,1]!
...
...
SUCCESS: times_a=998, elements match in tpe_vector_c and vector_c for size A[1,32] * B[32,1] = C[1,1]!
SUCCESS: times_a=999, elements match in tpe_vector_c and vector_c for size A[1,32] * B[32,1] = C[1,1]!
SUCCESS: times_a=1000, elements match in tpe_vector_c and vector_c for size A[1,32] * B[32,1] = C[1,1]!
Average cal_cycle for per-operand = 2.05
```
You can modify the following task in the testbench to adjust the distribution of the generated random numbers, e.g., the mean and standard deviation.
```verilog
task generate_int8_vector_a_b;
integer i, j;
begin
for (i = 0; i < K; i = i + 1) begin
vector_a[i] = normal_random(0, 20, -128, 127); //Normal distribution(mean,std_dev,min,max)
vector_b[i] = normal_random(0, 20, -128, 127); //Normal distribution(mean,std_dev,min,max)
end
end
endtask
```
Under different variances of the normal distribution, the acceleration effect brought by sparse encoding will vary. This is primarily influenced by the average number of partial products (under INT8)—the smaller this number, the faster the computation speed. In the testbench, we monitor and display the current average number of partial products in real-time, printed in **red font** in the command line.
|K = 32|Mean = 0, Std_dev = 10|Mean = 0, Std_dev = 20|Mean = 0, Std_dev = 30|Mean = 0, Std_dev = 40|Mean = 0, Std_dev = 50|
| :------------------------------------------: | :----------------------: | :----------------------: | :----------------------: | :----------------------: | :----------------------: |
|Average partial product|1.71|2.05|2.27|2.45|2.57|
|Rate of reduction in computational load(%)|57.25|48.75|43.25|38.75|35.75|
|K = 128|Mean = 0, Std_dev = 10|Mean = 0, Std_dev = 20|Mean = 0, Std_dev = 30|Mean = 0, Std_dev = 40|Mean = 0, Std_dev = 50|
| :------------------------------------------: | :----------------------------------: | :----------------------------------: | :----------------------------------: | :----------------------------------: | :----------------------------------: |
|Average partial product|1.75|2.10|2.32|2.48|2.60|
|Rate of reduction in computational load(%)|56.25|47.50|42.00|38.00|35.00|
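The reduction rates in these tables follow directly from a baseline of four partial products per INT8 operand (our reading of the numbers, shown here for the first column):

$$
\text{reduction rate} = 1 - \frac{\overline{PP}}{4},
\qquad \text{e.g.}\quad 1 - \frac{1.71}{4} = 57.25\,\%.
$$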

Next, we assemble a fundamental **column array** using these PEs to **perform matrix multiplication operations**. By utilizing **column PEs as primitives**, this architecture enables **scalable expansion of computing power** for larger-scale computational tasks. Run the following command to perform a test of 1000 GEMM calculations: (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT3_OPT4C/array/sim
$ make vcs
$ make vd
```
***<u>Note: To facilitate result comparison, we have exposed all the result output registers as output ports. Please note that in practical OS-style computing array systems, to ensure high area efficiency and meet output bandwidth requirements, the reduced results can either be output through systolic movement across all PEs (adding only a single adder per row to fuse sum and carry) or streamed out via selector-based pipelining after reduction. This flexibility helps minimize output bandwidth and fan-out to improve timing. Adjust the output format in your code according to your system's actual requirements!</u>***
In the testbench, parameters **M** and **K** are **software-configurable dimensions** that can be adjusted dynamically via software (e.g., through instructions or controller configurations). In contrast, parameter **N** is a **hardware dimension**: modifying **N** requires corresponding changes to the hardware architecture (e.g., altering the number of column PEs). For example, set parameters `M=32`, `K=32`, and `N=32`, then run 1000 random GEMM tests with normally distributed inputs.
```bash
$ make vcs
SUCCESS: times_a=1, all elements match in matrix_c and tpe_matrix for size A[32,32] * B[32,32] = C[32,32]!
SUCCESS: times_a=2, all elements match in matrix_c and tpe_matrix for size A[32,32] * B[32,32] = C[32,32]!
...
...
SUCCESS: times_a=998, all elements match in matrix_c and tpe_matrix for size A[32,32] * B[32,32] = C[32,32]!
SUCCESS: times_a=999, all elements match in matrix_c and tpe_matrix for size A[32,32] * B[32,32] = C[32,32]!
SUCCESS: times_a=1000, all elements match in matrix_c and tpe_matrix for size A[32,32] * B[32,32] = C[32,32]!
Average cal_cycle for per-operand = 2.28
```
Execute the following commands to perform OPT4C single column PE array synthesis (***<u>Note: Replace the working paths in both the scripts and filelist with your personal directory</u>***):
```bash
$ cd /OPT3_OPT4C/array/syn
$ sh run.sh
```
The following are typical configurations at different frequencies for the same column size:
| N|32|32|32|32|32|32|
| :---------------------------------------------------: | :-----------------------------: | :-----------------------------: | :-----------------------------: | :-----------------------------: | :-----: | :----: |
|Freq(MHz)|714|1000|1250|1666|1694|1720|
|Delay(ns)|1.30|0.95|0.74|0.55|0.54|**Timing VIOLATED**|
|Area(Total cell area)($um^{2}$)|23670|26548|29914|30690|30877|/|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|31861|35820|39558|40638|40865|/|
|N|16|16|16|16|
| :---------------------------------------------------: | :-----: | :-----: | :-----: | :----: |
|Freq(MHz)|714|1000|1724|1754|
|Delay(ns)|1.30|0.95|0.53|**Timing VIOLATED**|
|Area(Total cell area)($um^{2}$)|11788|12955|15854|/|
|Area(Include Net Interconnect area and cell area)($um^{2}$)|15118|16545|19913|/|
***<u>Note: Area and timing report in path "/OPT3_OPT4C/array/syn/outputs_array/saed32rvt_tt0p85v25c"</u>***
OPT4E is an extended K-dimensional version of OPT4C, which can reduce the area proportion of registers in the PE array and further improve area efficiency. Readers can reproduce it themselves based on the previous code; for any technical questions, feel free to contact the authors for discussion.
If you find this code helpful, please consider citing the following references. Thank you very much.
```bibtex
@inproceedings{wu2024t,
title={EN-T: Optimizing Tensor Computing Engines Performance via Encoder-Based Methodology},
author={Wu, Qizhe and Gui, Yuchen and Zeng, Zhichen and Wang, Xiaotian and Liang, Huawen and Jin, Xi},
booktitle={2024 IEEE 42nd International Conference on Computer Design (ICCD)},
pages={608--615},
year={2024},
organization={IEEE}
}
@inproceedings{wu2025exploring,
title={Exploring the Performance Improvement of Tensor Processing Engines through Transformation in the Bit-weight Dimension of MACs},
author={Wu, Qizhe and Liang, Huawen and Gui, Yuchen and Zeng, Zhichen and He, Zerong and Tao, Linfeng and Wang, Xiaotian and Zhao, Letian and Zeng, Zhaoxi and Yuan, Wei and others},
booktitle={2025 IEEE International Symposium on High Performance Computer Architecture (HPCA)},
pages={685--700},
year={2025},
organization={IEEE}
}
```
|
https://github.com/Certora/CertoraProver
|
CertoraProver
The Certora Prover is the state-of-the-art security tool for automated formal verification of smart contracts running on EVM-based chains, Solana and Stellar
Languages: Kotlin (86.6%), Python (5.6%), WebAssembly (4.6%), Solidity (1.6%), Rust (0.4%), Move (0.4%)
.circleci
.circleci
.github
.github
.vscode
.vscode
ASTExtraction/src/main
ASTExtraction/src/main
Public
Public
...
.editorconfig
.editorconfig
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitmodules
.gitmodules
LICENSE
LICENSE
> README.md
<div align="center">
[](https://gitmcp.io/Certora/CertoraProver)
[](https://x.com/certorainc)
</div>
# Certora Prover
The Certora Prover is a tool for formally verifying smart contracts.
This document is intended for those who would like to contribute to the tool.
If you are interested in using the tool on our cloud platform without having to build it locally,
we recommend following the documentation here: https://docs.certora.com/en/latest/docs/user-guide/install.html.
The instructions here are for users on Mac OS and Linux.
## Dependencies
* JDK 19+
* SMT solvers:
* [required] Z3 -- https://github.com/Z3Prover/z3/releases
* [required] CVC5 -- https://github.com/cvc5/cvc5/releases
* [optional] CVC4 -- https://cvc4.github.io/downloads.html
* [optional] Yices -- https://github.com/SRI-CSL/yices2/releases
* [optional] Bitwuzla -- https://github.com/bitwuzla/bitwuzla/releases
* _NOTE_ Whichever solvers you decide to install, remember to put the executables in a directory in your system's `PATH`.
* Python 3
- We recommend downloading from here: https://www.python.org/downloads/
- Make sure the version of pip matches the Python version
* Solidity compiler -- https://github.com/ethereum/solidity/releases.
Pick the version(s) used by the contracts you want to verify.
Since we often use many versions, it is recommended to rename each `solc` executable
to, e.g., solc5.12, and place all versions into a directory in your system's `PATH` like so: `export PATH="/path/to/dir/with/executables:$PATH"`
* Rust (tested on Version 1.81.0+) -- https://www.rust-lang.org/tools/install
* [`llvm-symbolizer`](https://llvm.org/docs/CommandGuide/llvm-symbolizer.html) and [`llvm-dwarfdump`](https://llvm.org/docs/CommandGuide/llvm-dwarfdump.html),
which are installed as part of LLVM.
* [`rustfilt`](https://github.com/luser/rustfilt)
## Optional Dependencies:
* [`Graphviz`](https://graphviz.org/download/):
Graphviz is an optional dependency required for rendering visual elements, `dot` in particular.
If not installed, some features may not work properly, such as [Tac Reports](https://docs.certora.com/en/latest/docs/prover/diagnosis/index.html#tac-reports).
_NOTE_ Remember to put `dot` in your system's `PATH`, by running:
```
export PATH="/usr/local/bin:$PATH"
```
* (Replace /usr/local/bin with the actual path where dot is installed.)
## Installation
* Create a directory anywhere to store build outputs.
- Add an environment variable `CERTORA` whose value is the path to this directory.
- Add this directory to `PATH` as well. For example if you are using a bash shell, you can edit your `~/.bashrc` file like so:
```
export CERTORA="preferred/path/for/storing/build/outputs"
export PATH="$CERTORA:$PATH"
```
* `cd` into a directory you want to store the CertoraProver source and clone the repo:
```
git clone --recurse-submodules https://github.com/Certora/CertoraProver.git
```
* Compile the code by running: `./gradlew assemble`
* If you want to clean up all artifacts of the project, run: `./gradlew clean`
* Make sure the path you used to set the variable `CERTORA` has important jars, scripts, and binaries like `emv.jar`, `certoraRun.py`, `tac_optimizer`.
### Troubleshooting
- We recommend working from within a python virtual environment and installing all dependencies there:
```commandline
cd CertoraProver
python -m venv .venv
source .venv/bin/activate
pip install -r scripts/certora_cli_requirements.txt
```
- If you have `Crypto` installed, you may first need to uninstall (`pip uninstall crypto`) before installing `pycryptodome`
- You can make sure `tac_optimizer` builds correctly by `cd`ing in to the `fried-egg` directory and running `cargo build --release`. Also make sure `tac_optimizer` is in your path (set using `CERTORA`).
## Running
- You can run the tool by running `certoraRun.py -h` to see all the options.
- There are several small examples for testing under `Public/TestEVM`. For example, you can run one of these like so:
```commandline
cd Public/TestEVM/CVLCompilation/OptionalFunction
certoraRun.py Default.conf
```
- Please refer to the user guide for details on how to run the prover on real-world smart contracts: https://docs.certora.com/en/latest/docs/user-guide/index.html
- You can run unit tests directly from IDEs like IntelliJ, or from the command line with `./gradlew test --tests <name_of_test_with_wildcards>`
- These tests are in `CertoraProver/src/test` (and also in the test directories of the various subprojects)
## Contributing
1. Fork the repo and open a pull request with your changes.
2. Contact Certora at devhelp@certora.com once your PR is ready.
3. Certora will assign a dev representative who will review and test the changes, and provide feedback directly in the PR.
4. Once the feature is approved and ready to be merged, Certora will merge it through its internal process and include the feature in a subsequent Prover release.
## LICENSE
Copyright (C) 2025 Certora Ltd. The Certora Prover is released under the GNU General Public License, Version 3, as published by the Free Software Foundation. For more information, see the file LICENSE.
|
https://github.com/KByrski/RaySplatting
|
RaySplatting
Languages: Cuda (79.0%), C++ (21.0%)
RaySplats
RaySplats
assets
assets
...
README.md
README.md
config.txt
config.txt
> README.md
# RaySplats: Ray Tracing based Gaussian Splatting
Krzysztof Byrski, Marcin Mazur, Jacek Tabor, Tadeusz Dziarmaga, Marcin Kądziołka, Dawid Baran, Przemysław Spurek <br>
| arXiv |
| :---- |
| RaySplats: Ray Tracing based Gaussian Splatting [https://arxiv.org/pdf/2501.19196.pdf](http://arxiv.org/abs/2501.19196)|
<img src=assets/gif1.gif height="300" class="center">
<br>
<table align="center" cellspacing="0" cellpadding="0">
<tr class="center">
<td><img src=assets/screenshot1.png height="200" width="300" class="center"></td>
<td><img src=assets/screenshot92.png height="200" width="300" class="center"></td>
<td><img src=assets/screenshot10.png height="200" width="300" class="center"> </td>
</tr>
<tr class="center">
<td><img src=assets/screenshot7.png height="200" width="300" ></td>
<td><img src=assets/screenshot82.png height="200" width="300" ></td>
<td><img src=assets/screenshot4.png height="200" width="300" class="center"> </td>
</tr>
</table>
# Features
- Spherical harmonics support up to the degree **4**.
- Interactive Windows viewer / optimizer application that lets you preview the trained model state in real time.
- Support for the **PLY** trained model output format.
- Highly efficient pure Gaussian renderer (no embedding primitive mesh approximation).
- Highly configurable optimizer based on the convenient text configuration file.
- Support for both the **Blender** and **COLMAP** data sets (after some preprocessing by the 3DGS).
- Built-in evaluation of the model and visualization to *.bmp files at a configurable frequency.
# Controls in the interactive Windows viewer / optimizer application
<img src="assets/app_main_window.png">
- **Double Left Click**: Toggle between the **static camera** and the **free roam** mode.
- **Mouse Movement**: Rotate the camera in the **free roam** mode.
- **W / S**: Move forward / backward.
- **A / D**: Step left / right.
- **Spacebar / C**: Move up / down.
- **[ / ]**: Switch the camera to the previous / next training pose.
- **Print Screen**: Take a screenshot and save it to a 24-bit *.bmp file.
# Prerequisites:
- Visual Studio 2019 Enterprise;
- CUDA Toolkit 12.4.1;
- NVIDIA OptiX SDK 8.0.0;
# Building the interactive Windows viewer / optimizer application
- Create the new Windows Desktop Application project and name it "RaySplats";
- Remove the newly generated RaySplats.cpp file containing the code template;
- In **Build Dependencies** -> **Build Customizations...** select the checkbox matching your installed CUDA version. On our test system, we had to select the following checkbox:
**CUDA 12.4(.targets, .props)**
- Add all the files from the directory "RaySplats" to the project;
- In the project's Properties set **Configuration** to **"Release"** and **Platform** to **"x64"**;
- In **Properties** -> **Configuration Properties** -> **CUDA C/C++** -> **Common** -> **Generate Relocatable Device Code** select **Yes (-rdc=true)**;
- For file "shaders.cuh" in **Properties** -> **Configuration Properties** -> **General** -> **Item Type** select **"CUDA C/C++**;
- For files: "shaders.cuh", "shaders_SH0.cu", "shaders_SH1.cu", "shaders_SH2.cu", "shaders_SH3.cu" and "shaders_SH4.cu" in **Properties** -> **Configuration Properties** -> **CUDA C/C++** -> **Common**:
- Change the suffix of **Compiler Output (obj/cubin)** from **".obj"** to **".ptx"**;
- In **Generate Relocatable Device Code** select **No**;
- In **NVCC Compilation Type** select **"Generate device-only .ptx file (-ptx)"**;
- In **Properties** -> **Configuration Properties** -> **VC++ Directories** -> **Include Directories** add OptiX "include" directory path. On our test system, we had to add the following path:
```plaintext
"C:\ProgramData\NVIDIA Corporation\OptiX SDK 8.0.0\include"
```
- In **Properties** -> **Configuration Properties** -> **CUDA C/C++** -> **Device** -> **Code Generation** type the compute capability and microarchitecture version of your GPU. On our test system with RTX 4070 GPU we typed:
```plaintext
"compute_89,sm_89"
```
- In **Properties** -> **Configuration Properties** -> **Linker** -> **Input** -> **Additional Dependencies** add three new lines containing:
```plaintext
"cuda.lib"
```
```plaintext
"cudart.lib"
```
```plaintext
"cufft.lib"
```
- In each of two different blocks of code in file InitializeOptiXRenderer.cu:
```plaintext
if constexpr (SH_degree == 0) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH0.cu.ptx", "rb");
else if constexpr (SH_degree == 1) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH1.cu.ptx", "rb");
else if constexpr (SH_degree == 2) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH2.cu.ptx", "rb");
else if constexpr (SH_degree == 3) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH3.cu.ptx", "rb");
else if constexpr (SH_degree == 4) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH4.cu.ptx", "rb");
```
and
```plaintext
if constexpr (SH_degree == 0) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH0.cu.ptx", "rt");
else if constexpr (SH_degree == 1) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH1.cu.ptx", "rt");
else if constexpr (SH_degree == 2) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH2.cu.ptx", "rt");
else if constexpr (SH_degree == 3) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH3.cu.ptx", "rt");
else if constexpr (SH_degree == 4) f = fopen("C:/Users/pc/source/repos/RaySplats/RaySplats/x64/Release/shaders_SH4.cu.ptx", "rt");
```
replace the provided path with the path to the compiled *.ptx shader files on your disk.
# Training your first model (Blender dataset):
- Train the model with 3DGS for some small number of iterations (for example 100) on some Blender dataset (for example: "lego" from "NeRF synthetic" set);
- Convert all of the files in the subdirectories: "train" and "test" located in the dataset main directory to 24-bit *.bmp file format without changing their names;
- Copy the configuration file "config.txt" to the project's main directory. On our test system we copied it to the following directory:
```plaintext
"C:\Users\<Windows username>\source\repos\RaySplats\RaySplats"
```
- In lines: 4 and 5 of the configuration file specify the location of the dataset main directory and the output 3DGS *.ply file obtained after short model pretraining (**Important!** The spherical harmonics degree used for pretraining and the target one specified in the line 7 of the config file don't have to match);
- In lines: 13-15 of the configuration file specify the background color that matches the background color used for pretraining using the following formula:
R' = (R + 0.5) / 256<br>
G' = (G + 0.5) / 256<br>
B' = (B + 0.5) / 256<br>
where R, G and B are the integer non-negative background color coordinates in the range 0-255.
- Run the "RaySplats" project from the Visual Studio IDE;
# RaySplatting Viewer


This is a lightweight and user-friendly viewer for visualizing **RaySplatting** with additional user-loaded objects that support ray tracing. The viewer allows seamless integration of **OBJ** and **PLY (ASCII format)** files into the scene.
The current material system is optimized for models designed to be **reflective** or **glass-like**, making it ideal for rendering high-quality visuals with realistic light interactions.
## System Requirements
To use this viewer, ensure your system meets the following requirements:
- **Operating System**: Windows
- **GPU**: NVIDIA RTX 20xx series or higher (**RTX 30xx+ recommended**)
- **CUDA Version**: 12.4 or later
- **Required DLLs**: Place the following files in the directory:
```plaintext
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin
```
- `cudart64_12.dll`
- `cufft64_11.dll`
## Installation & Usage
1. Download the provided **ZIP file**. [Download ZIP](https://drive.google.com/file/d/1XPivZb6-dVtuwQ3T9UrxOF2gTTnerGhp/view?usp=sharing)
2. Extract the contents.
3. Run the **exe file**—no additional setup required!
4. Modify mesh properties in **mesh_config.txt**.
5. Change the base scene by editing the **PLY file path** in `config.txt`.
## Controls
- Exactly the same as in the interactive Windows viewer / optimizer application.
## Future Features
We are actively developing new features, including:
- **Enhanced mesh transformations** (scaling, rotation, position editing beyond `mesh_config.txt`)
- **Screenshot capture** for rendered scenes
- **View presets** to allow seamless switching between different perspectives
- **And much more!**
Stay tuned for updates and improvements!
|
https://github.com/patricktrainer/duckdb-doom
|
duckdb-doom
A Doom-like game using DuckDB
Languages: HTML (100.0%)
.github/workflows
.github/workflows
docs
docs
...
.gitignore
.gitignore
CONTRIBUTING.md
CONTRIBUTING.md
GETTING_STARTED.md
GETTING_STARTED.md
LICENSE
LICENSE
README.md
README.md
> README.md
# DuckDB-DOOM
A 3D first-person shooter game implemented entirely in SQL using DuckDB-WASM.

## Overview
DuckDB-DOOM is an experimental game that demonstrates the power of SQL for computational tasks. The entire game logic, including 3D raycasting, enemy AI, collision detection, and rendering is implemented using SQL queries running in DuckDB's WebAssembly build.
## Features
- True first-person 3D rendering using raycasting techniques
- Collision detection
- Enemy NPCs
- Shooting mechanics with bullets
- Minimap display
- All game logic implemented in SQL
## How to Play
1. Open `index.html` in a modern web browser
2. Use WASD keys to move:
- W: Move forward
- S: Move backward
- A: Turn left
- D: Turn right
3. Spacebar: Shoot
4. L: Toggle verbose logging (to browser console)
## Technology
This project uses:
- [DuckDB-WASM](https://github.com/duckdb/duckdb-wasm): SQL database that runs in the browser
- Pure HTML/JavaScript for the UI
- SQL for all game mechanics and rendering
## How It Works
The game uses SQL in interesting ways:
1. 3D Rendering: Uses recursive CTEs to implement raycasting
2. Game state: Stored in tables (player, enemies, bullets, map)
3. Physics: SQL queries handle collision detection and movement
4. Rendering: Views transform the 3D world state into ASCII art
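As a rough illustration of the raycasting idea, the hypothetical sketch below (not the project's actual queries; the table layout and constants are made up) marches a ray through a grid stored in a table using a recursive CTE. It assumes the `duckdb` Python package:
```python
import duckdb

con = duckdb.connect()
# A toy 8x8 map whose border cells are walls.
con.execute("CREATE TABLE map(x INT, y INT, wall BOOLEAN)")
con.execute("""
    INSERT INTO map
    SELECT x, y, (x = 0 OR y = 0 OR x = 7 OR y = 7)
    FROM range(8) t1(x), range(8) t2(y)
""")

# March a ray from (3.5, 3.5) along a fixed angle in 0.1-unit steps until it hits a wall.
steps = con.execute("""
    WITH RECURSIVE ray(step, px, py) AS (
        SELECT 0, 3.5::DOUBLE, 3.5::DOUBLE
        UNION ALL
        SELECT step + 1, px + COS(0.7) * 0.1, py + SIN(0.7) * 0.1
        FROM ray
        WHERE step < 200
          AND NOT EXISTS (
              SELECT 1 FROM map
              WHERE map.x = CAST(FLOOR(px) AS INT)
                AND map.y = CAST(FLOOR(py) AS INT)
                AND map.wall
          )
    )
    SELECT MAX(step) FROM ray
""").fetchone()[0]
print(steps)  # number of steps before the ray entered a wall cell
```
The distance found this way is what a renderer would turn into a wall-column height on screen.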
## Installation
No installation needed! Just clone the repository and open `index.html` in a web browser:
```bash
git clone https://github.com/patricktrainer/duckdb-doom.git
cd duckdb-doom
# Open index.html in your browser
```
## Contributing
Contributions are welcome! Some ideas for improvements:
- Add textures to walls
- Improve enemy AI features
- Add sound effects
- Create additional levels
## License
MIT License - see [LICENSE](LICENSE) for details.
## Acknowledgments
- Inspired by the original DOOM game
- Thanks to the DuckDB team for their amazing WebAssembly build
|
https://github.com/davidtos/JUring
|
JUring
JUring provides Java bindings for io_uring
Languages: Java (99.6%), Dockerfile (0.4%)
.devcontainer
.devcontainer
.vscode
.vscode
src
src
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
pom.xml
pom.xml
> README.md
# JUring: File I/O for Java using IO_uring
JUring is a Java library that provides bindings to Linux's io_uring asynchronous I/O interface using Java's Foreign Function
& Memory API. JUring can deliver significant performance improvements over standard Java FileChannel operations, particularly for high-throughput file I/O workloads.
## Performance
### Key Performance Highlights
**JUring with registered files provides the following performance:**
- **Up to 426% faster** than pre-opened FileChannels at 4KB buffer sizes for reads
- **29% faster** at 512-byte operations, handling over 22,000 operations per millisecond
- Write performance matching or exceeding FileChannel performance across buffer sizes at low to moderate thread counts
- Scalability across multiple concurrent threads (1-25 threads tested)
### Benchmark Results
All benchmarks were conducted on a Linux machine using JMH (Java Microbenchmark Harness) with 2,211 operations per test invocation.
#### Read Performance: Optimized File Access (25 threads)
Comparing registered files vs pre-opened FileChannels:
| Buffer Size | Registered Files (ops/ms) | Pre-opened FileChannels (ops/ms) | **Improvement** |
|-------------|---------------------------|----------------------------------|-----------------|
| 512 bytes | 22,332 | 17,277 | **+29%** |
| 4KB | 11,777 | 2,239 | **+426%** |
| 16KB | 631 | 554 | **+14%** |
| 64KB | 133 | 129 | **+3%** |
#### Read Performance: JUring vs FileChannel Operations (25 threads)
Comparing different I/O approaches with open/read/close patterns:
| Buffer Size | JUring Open/Read/Close (ops/ms) | FileChannel Open/Read/Close (ops/ms) | **Improvement** |
|-------------|--------------------------------|--------------------------------------|-----------------|
| 512 bytes | 1,252 | 968 | **+29%** |
| 4KB | 1,268 | 855 | **+48%** |
| 16KB | 563 | 445 | **+27%** |
| 64KB | 141 | 125 | **+13%** |
The goal of this benchmark is to open a file, read the given buffer size, and close it again. Opening and closing the files
is the heavy part of this benchmark.
#### Read Performance: Blocking I/O with Virtual Threads (25 threads)
Comparing JUring blocking vs FileChannel with Virtual Threads:
| Buffer Size | JUring Blocking + VThreads (ops/ms) | FileChannel + VThreads (ops/ms) | **Improvement** |
|-------------|-------------------------------------|----------------------------------|-----------------|
| 512 bytes | 1,051 | 923 | **+14%** |
| 4KB | 1,029 | 710 | **+45%** |
| 16KB | 788 | 350 | **+125%** |
| 64KB | 286 | 120 | **+138%** |
#### Write Performance Scaling
JUring registered files vs pre-opened FileChannels across different thread counts:
**Single Thread:**
| Buffer Size | JUring (ops/ms) | FileChannel (ops/ms) | **Improvement** |
|-------------|-----------------|----------------------|-----------------|
| 512 bytes | 891 | 400 | **+123%** |
| 4KB | 860 | 260 | **+231%** |
| 16KB | 498 | 144 | **+246%** |
| 64KB | 151 | 53 | **+185%** |
**8 Threads:**
| Buffer Size | JUring (ops/ms) | FileChannel (ops/ms) | **Improvement** |
|-------------|-----------------|----------------------|-----------------|
| 512 bytes | 4,292 | 2,429 | **+77%** |
| 4KB | 3,013 | 1,724 | **+75%** |
| 16KB | 1,189 | 895 | **+33%** |
| 64KB | 286 | 294 | **-3%** |
**20 Threads:**
| Buffer Size | JUring (ops/ms) | FileChannel (ops/ms) | **Improvement** |
|-------------|-----------------|----------------------|-----------------|
| 512 bytes | 5,200 | 5,204 | **0%** |
| 4KB | 3,381 | 3,440 | **-2%** |
| 16KB | 1,211 | 1,449 | **-16%** |
| 64KB | 233 | 346 | **-33%** |
### When to Use JUring
**JUring excels in scenarios with:**
- High-throughput file I/O operations (thousands of ops/ms)
- Applications that can pre-register files for optimal performance
- Workloads with small to medium buffer sizes (512B - 16KB)
- Single-threaded or lightly-threaded write operations
- Mixed read/write workloads where read performance is critical
**Consider standard FileChannel for:**
- High-concurrency write operations (20+ threads)
- Large buffer sizes (64KB+) with many concurrent writers
- Applications requiring broad platform compatibility
- Occasional file operations
- Simplicity (Working on this!)
## Benchmark Methodology
The benchmarks use JMH (Java Microbenchmark Harness) with the following configuration:
- **Operations per test**: 2,211 operations per invocation, each thread has to process a given list of files and offsets
- **Queue depth**: 256 inflight requests
- **Access pattern**: Random offsets within files
- **Thread counts**: 1, 8, 20, and 25 concurrent threads (varies by test)
- **Buffer sizes**: 512 bytes, 4KB, 16KB, 64KB
- **Warmup**: Ring initialization performed outside benchmark timing
### Benchmark Categories
- **`registeredFiles`**: io_uring with pre-registered file descriptors (optimal performance)
- **`preOpenedFileChannels`**: FileChannel with pre-opened file handles
- **`juringOpenReadClose`**: JUring with full open/read/close cycle
- **`fileChannelOpenReadClose`**: FileChannel with full open/read/close cycle
- **`juringBlockingWithVirtualThreads`**: JUring blocking API with Virtual Threads
- **`fileChannelOpenReadCloseOnVirtualThreads`**: FileChannel with Virtual Threads
For complete benchmark source code and detailed methodology, see the test files in the repository `src/test/java/bench/random`.
## Requirements
- Linux kernel 5.1 or higher
- liburing installed
- Java 22 or higher (for Foreign Function & Memory API)
## Current Limitations and Future Improvements
### Points of interest
- **Read operations**: JUring shows consistent advantages, especially with registered files
- **Write operations**: Performance advantages diminish at high concurrency (20+ threads)
- **Sweet spot**: 4KB buffer size shows the most dramatic improvements for reads
- **Scaling**: JUring shows better scaling characteristics for single-threaded operations
### Known Limitations
- **Initialization overhead**: Creating JUring instances takes a few milliseconds
- **Platform dependency**: Linux-only due to io_uring requirement
- **High concurrency writes**: FileChannel may perform better with many concurrent writers
### Planned Improvements
- Ring pooling for reduced initialization costs
- Write performance optimization for high-concurrency scenarios
- Additional io_uring features (file modes, flags, sockets)
- Enhanced blocking API performance
- Improved memory cleanup strategies
## Creating the benchmark files
If you want to run the benchmark yourself, you can use the following:
```shell
seq 1 2211 | xargs -P 8 -I {} bash -c 'yes "{} " | head -c 5242880 > "file_{}.bin"'
```
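For environments without `bash`/`xargs`, here is a rough Python equivalent of the one-liner above (an illustrative sketch, not part of JUring):
```python
# Create file_1.bin .. file_2211.bin, each 5 MiB of the repeated "<n> " line,
# using 8 workers to mirror `xargs -P 8`.
from concurrent.futures import ThreadPoolExecutor

SIZE = 5 * 1024 * 1024  # matches `head -c 5242880`

def make_file(i: int) -> None:
    chunk = f"{i} \n".encode()                       # what `yes "{} "` emits per line
    data = (chunk * (SIZE // len(chunk) + 1))[:SIZE]
    with open(f"file_{i}.bin", "wb") as f:
        f.write(data)

with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(make_file, range(1, 2212)))
```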
---
*Note: Benchmark results show that JUring's advantages are most pronounced for read operations and single-threaded scenarios. For write-heavy workloads with high concurrency, evaluate both approaches based on your specific use case.*
# The Read benchmarks:
Local file performance @ 25 threads:
```text
Benchmark (bufferSize) Mode Cnt Score Error Units
RandomReadBenchMark.juringBlockingWithVirtualThreads 512 thrpt 5 1050.689 ± 2.313 ops/ms
RandomReadBenchMark.juringBlockingWithVirtualThreads 4096 thrpt 5 1028.819 ± 1.627 ops/ms
RandomReadBenchMark.juringBlockingWithVirtualThreads 16386 thrpt 5 787.902 ± 3.424 ops/ms
RandomReadBenchMark.juringBlockingWithVirtualThreads 65536 thrpt 5 286.451 ± 2.304 ops/ms
RandomReadBenchMark.fileChannelOpenReadCloseOnVirtualThreads 512 thrpt 5 923.494 ± 11.217 ops/ms
RandomReadBenchMark.fileChannelOpenReadCloseOnVirtualThreads 4096 thrpt 5 710.151 ± 3.830 ops/ms
RandomReadBenchMark.fileChannelOpenReadCloseOnVirtualThreads 16386 thrpt 5 350.201 ± 1.265 ops/ms
RandomReadBenchMark.fileChannelOpenReadCloseOnVirtualThreads 65536 thrpt 5 120.250 ± 0.845 ops/ms
RandomReadBenchMark.juringOpenReadClose 512 thrpt 5 1252.103 ± 72.777 ops/ms
RandomReadBenchMark.juringOpenReadClose 4096 thrpt 5 1267.618 ± 61.142 ops/ms
RandomReadBenchMark.juringOpenReadClose 16386 thrpt 5 562.698 ± 25.074 ops/ms
RandomReadBenchMark.juringOpenReadClose 65536 thrpt 5 141.287 ± 17.662 ops/ms
RandomReadBenchMark.fileChannelOpenReadClose 512 thrpt 5 968.433 ± 7.388 ops/ms
RandomReadBenchMark.fileChannelOpenReadClose 4096 thrpt 5 854.720 ± 11.367 ops/ms
RandomReadBenchMark.fileChannelOpenReadClose 16386 thrpt 5 445.172 ± 11.166 ops/ms
RandomReadBenchMark.fileChannelOpenReadClose 65536 thrpt 5 124.710 ± 2.004 ops/ms
```
Performance @ 25 threads
```text
Benchmark (bufferSize) Mode Cnt Score Error Units
RandomReadBenchMark.preOpenedFileChannels 512 thrpt 5 17276.679 ± 203.531 ops/ms
RandomReadBenchMark.preOpenedFileChannels 4096 thrpt 5 2238.837 ± 70.137 ops/ms
RandomReadBenchMark.preOpenedFileChannels 16386 thrpt 5 554.172 ± 19.729 ops/ms
RandomReadBenchMark.preOpenedFileChannels 65536 thrpt 5 129.320 ± 2.716 ops/ms
RandomReadBenchMark.registeredFiles 512 thrpt 5 22331.600 ± 400.126 ops/ms
RandomReadBenchMark.registeredFiles 4096 thrpt 5 11777.366 ± 763.342 ops/ms
RandomReadBenchMark.registeredFiles 16386 thrpt 5 631.134 ± 45.910 ops/ms
RandomReadBenchMark.registeredFiles 65536 thrpt 5 132.891 ± 15.717 ops/ms
```
# The Write benchmarks:
1 thread
```text
Benchmark (bufferSize) Mode Cnt Score Error Units
RandomWriteBenchmark.preOpenedFileChannels 512 thrpt 5 400.075 ± 17.247 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 4096 thrpt 5 260.327 ± 5.694 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 16386 thrpt 5 143.749 ± 1.424 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 65536 thrpt 5 53.066 ± 1.149 ops/ms
RandomWriteBenchmark.registeredFiles 512 thrpt 5 891.473 ± 96.506 ops/ms
RandomWriteBenchmark.registeredFiles 4096 thrpt 5 860.157 ± 35.019 ops/ms
RandomWriteBenchmark.registeredFiles 16386 thrpt 5 497.574 ± 3.014 ops/ms
RandomWriteBenchmark.registeredFiles 65536 thrpt 5 150.941 ± 18.614 ops/ms
```
8 threads
```text
Benchmark (bufferSize) Mode Cnt Score Error Units
RandomWriteBenchmark.preOpenedFileChannels 512 thrpt 5 2428.613 ± 57.373 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 4096 thrpt 5 1723.750 ± 47.703 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 16386 thrpt 5 894.529 ± 21.969 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 65536 thrpt 5 294.078 ± 16.229 ops/ms
RandomWriteBenchmark.registeredFiles 512 thrpt 5 4291.695 ± 34.726 ops/ms
RandomWriteBenchmark.registeredFiles 4096 thrpt 5 3013.474 ± 43.673 ops/ms
RandomWriteBenchmark.registeredFiles 16386 thrpt 5 1189.466 ± 6.460 ops/ms
RandomWriteBenchmark.registeredFiles 65536 thrpt 5 285.783 ± 30.037 ops/ms
```
20 threads
```text
Benchmark (bufferSize) Mode Cnt Score Error Units
RandomWriteBenchmark.preOpenedFileChannels 512 thrpt 5 5204.042 ± 65.680 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 4096 thrpt 5 3440.433 ± 89.458 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 16386 thrpt 5 1449.132 ± 111.456 ops/ms
RandomWriteBenchmark.preOpenedFileChannels 65536 thrpt 5 346.176 ± 17.737 ops/ms
RandomWriteBenchmark.registeredFiles 512 thrpt 5 5200.068 ± 128.891 ops/ms
RandomWriteBenchmark.registeredFiles 4096 thrpt 5 3380.841 ± 5.979 ops/ms
RandomWriteBenchmark.registeredFiles 16386 thrpt 5 1211.093 ± 10.345 ops/ms
RandomWriteBenchmark.registeredFiles 65536 thrpt 5 232.730 ± 17.184 ops/ms
```
|
https://github.com/LayerZero-Labs/qmdb
|
qmdb
Quick Merkle Database
Languages: Rust (98.2%), Shell (1.2%)
.devcontainer
.devcontainer
.github
.github
bench
bench
docs
docs
hpfile
hpfile
...
.editorconfig
.editorconfig
.gitignore
.gitignore
CONTRIBUTING.md
CONTRIBUTING.md
Cargo.toml
Cargo.toml
LICENSE-APACHE
LICENSE-APACHE
> README.md
# QMDB: Quick Merkle Database


## Overview
The Quick Merkle Database (QMDB) is a high-performance verifiable key-value store, designed to optimize blockchain state storage.
It is designed to take advantage of modern SSDs and minimize flash write amplification with an append-only design.
QMDB can perform in-memory Merklelization with minimal DRAM usage, and offers efficient cryptographic proofs for inclusion, exclusion, and historical states.
Read the QMDB paper here: <https://arxiv.org/pdf/2501.05262>
*QMDB is ongoing research. Designed for high performance and practical use, some features are still evolving. We invite feedback and contributions from the community.*
## Use Cases
- **Blockchain State Storage**: Ideal for maintaining verifiable state in decentralized systems.
- **Database Optimization**: Useful for any application requiring high-performance verifiable key-value storage.
## Features
- **SSD-Optimized Design**
Reduces flash write amplification by storing updates as append-only twigs.
- **In-Memory Merkleization**
Minimizes disk I/O for proofs and updates, requiring only a small DRAM footprint.
- **Low I/O Overhead**
Achieves O(1) I/O per update and just one SSD read per state access.
- **High Throughput**
Demonstrated 6× gains over RocksDB and 8× over state-of-the-art verifiable databases.
- **Scalable Architecture**
Validated on datasets up to 15 billion entries, with projections up to 280 billion entries on a single machine.
- **Broad Hardware Compatibility**
Runs effectively on both consumer-grade PCs and enterprise servers, lowering barriers to blockchain participation.
## Key data structures
- **Entry** ([`qmdb/src/entryfile/entry.rs`](qmdb/src/entryfile/entry.rs)): The primitive data structure in QMDB, with each Entry corresponding to a single key-value pair.
- **Twigs** ([`qmdb/src/merkletree/twig.rs`](qmdb/src/merkletree/twig.rs)): A compact and efficient representation of the Merkle tree, minimizing DRAM usage by keeping most data on SSD.
## Installation
To get started, clone the repository:
```bash
git clone https://github.com/LayerZero-Labs/qmdb
cd qmdb
```
The following pre-requisites are required to build QMDB:
- g++
- linux-libc-dev
- libclang-dev
- unzip
- libjemalloc-dev
- make
We provide a script to install the pre-requisites on Ubuntu:
```bash
./install-prereqs-ubuntu.sh
```
Build the project using Cargo:
```bash
cargo build --release
```
Run a quick benchmark:
```bash
head -c 10M </dev/urandom > randsrc.dat
cargo run --bin speed -- --entry-count 4000000
```
Run unit tests:
```bash
cargo nextest run
```
## Getting started
We include a simple example in [`examples/v2_demo.rs`](qmdb/examples/v2_demo.rs) to create a QMDB instance and interact with the database. You can run it as follows:
```bash
cargo run --example v2_demo
```
## Directory Structure
- **`qmdb/src/`**: Main QMDB source code
- **`examples/`**: Example projects demonstrating QMDB usage.
- **`tests/`**: Unit tests.
- **`entryfile/`**: Implements the `Entry` data structure
- **`merkletree/`**: Contains `Twigs` (Merkle subtrees ordered by insertion time and not key)
- **`indexer/`**: In-memory indexer to map keys to QMDB entries
- **`indexer/hybrid/`**: Hybrid indexer that is optimized for SSD
- **`stateless/`**: Builds an in-memory subset of the world state for stateless validation
- **`seqads/`**: Sequential ADS, used to generate input data for stateless validation
- **`tasks/`**: The Create/Update/Delete requests to QMDB must be encapsulated into ordered tasks
- **`utils/`**: Miscellaneous utility and helper functions.
- **`bench/`**: Benchmarking utility.
- **`hpfile/`**: Head-prunable file: an HPFile is a series of fixed-size files in QMDB that together simulate a single large file, enabling efficient pruning from the front (see the sketch below).
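A toy Python sketch of the head-prunable-file idea (not QMDB's `hpfile` API; the 64 MiB segment size is an illustrative assumption):
```python
SEGMENT_SIZE = 64 * 1024 * 1024  # hypothetical fixed size of each backing file

def locate(logical_offset: int):
    # A logical offset maps to (segment index, offset within that segment).
    return logical_offset // SEGMENT_SIZE, logical_offset % SEGMENT_SIZE

def prunable_segments(prune_before: int):
    # Segments that lie entirely below the prune point can simply be deleted.
    return range(prune_before // SEGMENT_SIZE)

print(locate(200_000_000))                   # (2, 65782272)
print(list(prunable_segments(200_000_000)))  # [0, 1]
```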
## Contributing
See [CONTRIBUTING.md](CONTRIBUTING.md) for more information on how to contribute to QMDB.
## Any questions?
[Please raise a GitHub issue](https://github.com/LayerZero-Labs/qmdb/issues/new).
## License
This project is dual licensed under the MIT License and the Apache License 2.0.
## Acknowledgements
If you use QMDB in a publication, please cite it as:
**QMDB: Quick Merkle Database**<br>
Isaac Zhang, Ryan Zarick, Daniel Wong, Thomas Kim, Bryan Pellegrino, Mignon Li, Kelvin Wong<br>
<https://arxiv.org/abs/2501.05262>
```bibtex
@article{zhang2025qmdb,
title={Quick Merkle Database},
author={Zhang, Isaac and Zarick, Ryan and Wong, Daniel and Kim, Thomas and Pellegrino, Bryan and Li, Mignon and Wong, Kelvin},
journal={arXiv preprint arXiv:2501.05262},
year={2025}
}
```
QMDB is a product of [LayerZero Labs](https://layerzero.network) Research.
<!-- markdownlint-disable MD033 -->
<p align="center">
<a href="https://layerzero.network#gh-dark-mode-only">
<img alt="LayerZero" style="width: 50%" src="https://github.com/LayerZero-Labs/devtools/raw/main/assets/logo-dark.svg#gh-dark-mode-only"/>
</a>
<a href="https://layerzero.network#gh-light-mode-only">
<img alt="LayerZero" style="width: 50%" src="https://github.com/LayerZero-Labs/devtools/raw/main/assets/logo-light.svg#gh-light-mode-only"/>
</a>
</p>
<p align="center">
<a href="https://layerzero.network" style="color: #a77dff">Homepage</a> | <a href="https://docs.layerzero.network/" style="color: #a77dff">Docs</a> | <a href="https://layerzero.network/developers" style="color: #a77dff">Developers</a>
</p>
|
https://github.com/sigwl/AiDA
|
AiDA
An AI-powered assistant for IDA 9.0+ to accelerate reverse engineering of C++ games.
Languages: C++ (94.2%), Python (5.8%)
AiDA
AiDA
libs
libs
python
python
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
SETUP.md
SETUP.md
actions.cpp
actions.cpp
> README.md
<h1 align="left">AiDA - AI Assistant for IDA Pro</h1>
<p align="left">
<img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="License">
<img src="https://img.shields.io/github/stars/sigwl/AiDA" alt="Stars">
<img src="https://img.shields.io/github/forks/sigwl/AiDA" alt="Forks">
</p>
<p>AiDA is a high-performance, AI-powered assistant plugin for IDA Pro (9.0+) written in C++ to provide maximum speed and stability. It's designed to accelerate the reverse engineering of modern C++ games by leveraging large language models (Google Gemini, OpenAI, and Anthropic) directly within the IDA environment.</p>
<p><a href="#features">Features</a> •
<a href="#installation">Installation</a> •
<a href="#configuration">Configuration</a> •
<a href="#usage">Usage</a> •
<a href="#important-note">Important Note</a> •
<a href="#license">License</a> •
<a href="https://discord.gg/JMRkEThbUU">Discord</a>
</p>
<h2>Features</h2>
* **(COMING SOON!) Hybrid Engine Scanning:** Combines static pattern scanning (GSpots) and advanced AI analysis to locate critical Unreal Engine globals like `GWorld`, `GNames`, and `GObjects`.
* **In-Depth Function Analysis:** Provides a detailed report on a function's purpose, logic, inputs/outputs, and potential game hacking opportunities.
* **Automatic Renaming:** Suggests descriptive, context-aware names for functions.
* **Struct Generation:** Reconstructs C++ structs from function disassembly, automatically handling padding and member offsets.
* **Hook Generation:** Creates C++ MinHook snippets for easy function interception.
* **Custom Queries:** Ask any question about a function and get a direct, technical answer.
* **Multi-Provider Support:** Works with Google Gemini, OpenAI (ChatGPT), and Anthropic (Claude) models.
* **Native Performance:** Written in C++ for a seamless and fast user experience with no Python dependency.
## Installation
To install and run AiDA, follow these steps:
### Prerequisites
Before installing the AiDA plugin, ensure you have the following essential dependencies:
1. **Microsoft Visual C++ Redistributables:** Install the official Microsoft Visual C++ Redistributables. These are crucial for many C++ applications on Windows.
2. **OpenSSL:** Install OpenSSL. For Windows, a reliable third-party installer can be found at [https://slproweb.com/products/Win32OpenSSL.html](https://slproweb.com/products/Win32OpenSSL.html).
* The "Win64 OpenSSL v3.x.x Light" version should typically be sufficient.
* Please use the installer (`.exe`). During the installation process, it is critical to choose the following option when prompted:
* Copy OpenSSL DLLs to:
* ✅ **The Windows system directory** (check this one!)
* 🚫 The OpenSSL binaries (`/bin`) directory (do **not** check this one!)
### Plugin Installation
Once the prerequisites are met:
1. Go to the [**Releases**](https://github.com/sigwl/AiDA/releases) page of this repository.
2. Download the latest release ZIP file (e.g., `AiDA_v1.1.zip`).
3. Extract the archive. You will find an `AiDA.dll` file.
4. Copy `AiDA.dll` into your IDA Pro plugins directory. The path is typically:
* `%APPDATA%\Hex-Rays\IDA Pro\plugins` on Windows
* `$HOME/.idapro/plugins` on Linux/Mac
## MCP Installation
AiDA also supports Model Context Protocol (MCP) integration. This feature is based on the excellent work from [ida-pro-mcp](https://github.com/mrexodia/ida-pro-mcp) by mrexodia.
### Prerequisites
Ensure you have **Python 3.11** or higher installed on your system.
### Installation Steps
1. Install AiDA via pip:
```bash
pip install git+https://github.com/sigwl/AiDA
```
2. Run the installation command to automatically copy the plugin to your IDA Pro plugins directory:
```bash
aida --install
```
3. Open IDA Pro, go to **Edit → Plugins**, and click **AiDA-MCP** to activate the Model Context Protocol support.
## Configuration
1. The first time you run IDA Pro with the plugin, it will prompt you to open the settings dialog.
2. You can also access it at any time via the right-click context menu in a disassembly or pseudocode view: `AI Assistant > Settings...`.
3. In the settings dialog, select your desired AI Provider and enter your API key. The key will be saved locally in your user directory (`%APPDATA%\Hex-Rays\IDA Pro\ai_assistant.cfg`) and is never transmitted anywhere except to the AI provider's API.
### GitHub Copilot Configuration (Special Instructions)
Using GitHub Copilot requires an external proxy server that translates Copilot's API into a standard format.
**Step 1: Run the Copilot API Proxy**
You must have the `copilot-api` server running in the background. This server handles authentication with your GitHub account.
1. Make sure you have [Bun](https://bun.sh/) installed.
2. Open a terminal or command prompt and run the following command:
```bash
npx copilot-api@latest start
```
3. The first time you run this, it will guide you through a one-time authentication process with GitHub.
4. Leave this terminal window open. The proxy server must be running for AiDA to use Copilot.
**Step 2: Configure AiDA**
1. In IDA, open the AiDA settings (`AI Assistant > Settings...`).
2. Set the **Provider** to `Copilot`.
3. Ensure the **Proxy Address** in the `Copilot` tab is correct. The default is `http://127.0.0.1:4141`, which should work if you ran the command above without changes.
4. Select your desired Copilot model (e.g., `claude-sonnet-4`).
### API Provider Configuration
* **Provider:** Choose the AI service you want to use (Gemini, OpenAI, or Anthropic).
* **API Key:** Your personal key for the selected provider. This is required for authentication.
* **Model Name:** Specify which model to use. More powerful models (like Gemini 2.5 Pro or Claude 4 Opus) provide higher-quality analysis but cost more per use. Lighter models (like Gemini 1.5 Flash or GPT-4o mini) are faster and cheaper.
> **IMPORTANT: Model Choice Determines Output Quality**
> The quality of the AI model you select is the single most important factor affecting the accuracy and insightfulness of the results. For critical analysis of complex functions, using a top-tier model is **strongly recommended**.
>
> For example, a powerful model like **Google's Gemini 2.5 Pro** will consistently provide more comprehensive and correct analysis than a lighter, faster model like **Gemini 1.5 Flash**.
### Analysis Parameters
* **Max Prompt Tokens:** This is a critical setting for managing cost and quality. It limits the total amount of context (your function's code, cross-references, etc.) sent to the AI.
* **Higher Value (e.g., 30,000):** Provides the AI with more context, leading to more accurate and detailed analysis. This is more expensive and slightly slower.
* **Lower Value (e.g., 8,000):** Cheaper and faster, but the AI may miss important details due to the limited context.
* **XRef Context Count:** The maximum number of calling functions (callers) and called functions (callees) to include in the prompt. Increasing this gives the AI a better understanding of the function's role.
* **XRef Analysis Depth:** How "deep" to go in the call chain when gathering context. A depth of `1` gets direct callers; a depth of `2` gets direct callers *and* their callers (illustrated in the toy sketch after this list).
> **Warning:** A depth greater than 3 can cause the context size to grow extremely quickly. However, a higher value is often necessary for a complete analysis of complex call chains.
* **Code Snippet Lines:** The number of lines of decompiled code to include for each cross-reference. **A high value (e.g., 60-100) is recommended to give the AI better context.**
* **Bulk Processing Delay:** A delay (in seconds) between consecutive API calls during automated tasks like the Unreal Scanner. This is a safety feature to prevent you from being rate-limited by the API provider.
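To make the effect of the last few parameters concrete, here is a toy Python sketch (not AiDA's code; the function names are made up) showing how the gathered caller context grows with XRef Analysis Depth:
```python
def gather_callers(call_graph: dict, func: str, depth: int) -> set:
    """call_graph maps a function name to the set of functions that call it."""
    collected, frontier = set(), {func}
    for _ in range(depth):
        frontier = {caller for f in frontier for caller in call_graph.get(f, set())}
        collected |= frontier
    return collected

graph = {"DamageActor": {"ProcessHit"}, "ProcessHit": {"TickWeapon"}}
print(gather_callers(graph, "DamageActor", 1))  # {'ProcessHit'}
print(gather_callers(graph, "DamageActor", 2))  # {'ProcessHit', 'TickWeapon'} (order may vary)
```
Every extra level multiplies the amount of decompiled code sent to the model, which is why the depth and snippet-line settings interact with the Max Prompt Tokens limit.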
## Usage
Simply right-click within a disassembly or pseudocode view in IDA to access the `AI Assistant` context menu. From there, you can select any of the analysis or generation features. All actions can also be found in the main menu under `Tools > AI Assistant`.
## Important Note
Please be aware that AiDA is currently in **BETA** and is not yet fully stable. You may encounter bugs or unexpected behavior.
If you experience any issues or have bug reports, please:
* Create an issue on the [GitHub repository](https://github.com/sigwl/AiDA/issues).
* Join our Discord server for support and discussions: [https://discord.gg/JMRkEThbUU](https://discord.gg/JMRkEThbUU)
* Or, reach out to **"firewl"** on Discord by sending a friend request.
## License
This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details.
|
https://github.com/Grubre/smol-gpu
|
smol-gpu
An rv32i inspired ISA, SIMT GPU implementation in system-verilog.
Languages: C++ (64.3%), SystemVerilog (33.0%), CMake (2.1%)
external
external
readme
readme
sim
sim
src
src
test
test
...
.clang-format
.clang-format
.clang-tidy
.clang-tidy
.clangd
.clangd
.gitignore
.gitignore
.rules.verible_lint
.rules.verible_lint
> README.md
# Smol GPU
[]()
[]()
[]()
[]()
An educational implementation of a parallel processor in system-verilog.
The [Intro to GPU Architecture](#intro-to-gpu-architecture) chapter is a short write-up on the theoretical basics needed to understand the GPU implemented in this repository.
If you want to set up the simulation on your machine, see [Simulation](#simulation) and [Project Structure](#project-structure).
- [Introduction](#introduction)
- [Intro to GPU Architecture](#intro-to-gpu-architecture)
- [Comparison with CPUs](#comparison-with-cpus)
- [SIMT Architecture](#simt-architecture)
- [Branching](#branching)
- [ISA](#isa)
- [Vector Registers](#vector-registers)
- [Scalar Registers](#scalar-registers)
- [Instructions](#instructions)
- [Instruction List](#instruction-list)
- [Assembly](#assembly)
- [Syntax](#syntax)
- [Example](#example)
- [Microarchitecture](#microarchitecture)
- [Project Structure](#project-structure)
- [Simulation](#simulation)
- [Justfile](#justfile)
- [CMake](#cmake)
- [Running the Simulator](#running-the-simulator)
- [Acknowledgments](#acknowledgments)
- [Roadmap](#roadmap)
## Introduction
The purpose of this project was to create an open-source GPU which can serve as an introduction to modern GPU architecture.
The project is heavily influenced by [tiny-gpu](https://github.com/adam-maj/tiny-gpu).
It builds upon tiny-gpu by incorporating a more advanced ISA (based on RISC-V RV32I), running multiple warps per core, and supporting branching, among other things.
For someone trying to learn how a GPU works, I still recommend checking out tiny-gpu first, and only then coming back to this project.
The rest of this chapter is an introduction to GPU architecture.
## Intro to GPU Architecture
Nowadays, graphics cards are designed with the purpose of processing large amounts of data in a parallel manner.
The massive parallelism seen in GPUs stems from their initial purpose - processing data for each pixel on the screen.
In the early 2000s, programmers realised that this computational model can be used for more than just graphics programming.
Thus, we got cards like the NVIDIA GeForce 3 series, GeForce 4, or ATI Radeon 8500, which were the first to introduce programmable shaders.
Later on, that evolved into frameworks like CUDA, and so currently graphics cards are widely used for parallel computation in fields such as machine learning, cryptography or scientific computing.
### Comparison with CPUs
Most modern CPUs are designed to be versatile in their function.
They have to perform both sequential and parallel computations while also running the operating system and handling I/O operations.
In contrast to that, GPUs are designed with a single goal in mind - processing as much data in parallel as possible.
The currently used paradigm that helps achieve that is called **SIMT (Single Instruction Multiple Thread)**, which is described in the next subchapter.
### SIMT architecture
We have two goals when designing a GPU.
The first is to be able to process as much data in parallel as possible.
The second goal is to avoid breaking the fully parallel programming model.
What that means is that, from the perspective of a programmer, we want to create an illusion of all the computation happening in parallel.
For example, when someone runs this sort of CUDA code:
```cpp
vector_add <<< 256 , 1024 >>> (d_out, d_a, d_b, N);
```
What they expect, is that there will be 256 blocks of 1024 threads running in parallel.
However, in practice, we are limited by the hardware - we can't actually have an arbitrary number of independent cores running the computation.
In that case, what is the best way to create a chip that is both efficient in parallel computation and possible to implement using actual hardware?
The answer is **Multi-threaded** architecture.
Our GPU will have multiple **cores** (known as SMs in Nvidia's terminology), which are independent processors.
Each of those cores has many **threads** which are grouped into **warps**.
Threads within a single warp execute the same instruction but with different data and state.
Each of them has its own set of registers, which are called **vector registers**.
As previously mentioned, all the threads within a warp execute the same instruction in lockstep.
That means that every warp has a separate program counter which is shared between the threads.
The warps also have their own set of registers, which are called **scalar registers**.
So, why do we organize our GPU in such a way?
We are taking advantage of the characteristics of modern processors - some operations take more time than others.
As an example, memory access can be a few orders of magnitude slower than a simple add instruction, and in most cases fetching from memory is what a processor spends most of its time doing.
Reducing or masking the latency caused by memory is a good way to make our processor faster.
The number of threads is much greater than the number of units such as ALU (Arithmetic Logic Unit) or LSU (Load Store Unit).
At each point in time, only one of the warps has access to the resources of the core while others do some work in the background, like fetching an instruction or data from memory.
With that architecture, we can relatively cheaply increase the number of threads within a core, because the number of warps is independent of the number of resources (ALUs, LSUs).
One obvious issue is a situation in which threads within a single warp take divergent paths within the code (one chooses `if` and another `else`).
### Branching
There are a couple ways to solve this problem.
The one I will describe here and that I implemented inside this GPU uses masking.
As previously mentioned, each of the warps is given a set of registers.
One of those registers is called **the execution mask**.
Each of its bits corresponds to one of the threads within that warp and denotes whether this particular thread should execute the next instruction (1 -> execute, 0 -> noop).
In addition to that, we need an instruction which will set those bits based on certain conditions.
For example, RISC-V has a `slti rd, rs1, imm` instruction, which compares the `rs1` register to an immediate value and writes `1` to the `rd` register if the comparison holds and `0` otherwise.
Now, let's modify this instruction in such a way, that each of the threads within the warp modifies a single bit in one of the warp's registers.
Then we can run comparisons on each of the threads within a warp independently and mask the execution of next instructions based on the outcome.
What a compiler developer might do is generate code that executes both paths of the `if` statement.
For the first path we use the execution mask produced by the compare function.
For the second path we invert the mask and execute it as well.
If we run into nested ifs, we can create a software stack which will keep the previous mask.
An example is shown in the picture below:

*Image taken from General-Purpose Graphics Processor Architecture (2018).*
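Below is a minimal Python model of this masking scheme (a conceptual sketch, not the SystemVerilog implementation): a compare step produces one mask bit per thread, then both sides of an `if` are issued, once under the mask and once under its inverse.
```python
WARP_SIZE = 8
thread_id = list(range(WARP_SIZE))
x5 = [0] * WARP_SIZE                       # a vector register: one slot per thread

def slti_mask(values, imm):
    # sx.slti-style compare: bit t is 1 if values[t] < imm.
    mask = 0
    for t, v in enumerate(values):
        if v < imm:
            mask |= 1 << t
    return mask

def masked_store(mask, dest, value_of):
    # Only threads whose mask bit is set perform the write; the rest no-op.
    for t in range(WARP_SIZE):
        if (mask >> t) & 1:
            dest[t] = value_of(t)

exec_mask = slti_mask(thread_id, 5)                        # s1 := (thread_id < 5)
masked_store(exec_mask, x5, lambda t: thread_id[t] + 1)    # "if" path
masked_store(~exec_mask & ((1 << WARP_SIZE) - 1), x5,      # inverted mask
             lambda t: -1)                                 # "else" path
print(x5)  # [1, 2, 3, 4, 5, -1, -1, -1]
```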
## ISA
The GPU itself is based on a 32-bit word, 32-bit address space ISA that closely resembles RV32I.
Some of the instructions that don't apply to a GPU design have been cut out (fence, csrrw, etc).
Also, there is currently no support for unsigned arithmetic instructions.
In order to differentiate between the warp and thread registers or instructions, the first ones will be called **scalar** and the second ones will be called **vector**.
### Vector Registers
Each of the threads within a warp has 32 of 32-bit registers.
As mentioned above, those are called vector registers and will be denoted with an `x` prefix (`x0`-`x31`).
Just like RV32I, `x0` is a read-only register with value 0.
However, for the purposes of GPU programming, registers `x1` - `x3` are also read-only and have a special purpose.
Namely, they contain the thread id, block id and block size, in that order.
The rest of the registers (`x4` - `x31`) are general purpose.
|**Register**|**Function** |
|------------|---------------|
|`x0` |zero |
|`x1` |thread id |
|`x2` |block id |
|`x3` |block size |
|`x4`-`x31` |general purpose|
### Scalar registers
Similarly to their vector counterpart, there are 32 scalar registers that hold 32-bit words.
In order to differentiate between them, the scalar registers are prefixed with `s` (`s0`-`s31`).
The zero-th register is also tied to 0.
Register `s1` is called the execution mask and has a special purpose but is not read-only.
As mentioned in the intro, each of the bits in that register denotes whether the corresponding thread should execute the current instruction.
This is also the reason why the GPU can be configured to have at most 32 threads per warp (size of the register).
|**Register**|**Function** |
|------------|---------------|
|`s0` |zero |
|`s1` |execution mask |
|`s2`-`s31`  |general purpose|
### Instructions
The instructions are split into three types:
- vector instructions
- scalar instructions
- vector-scalar instructions
Vector instructions are executed by each thread on the vector registers, scalar instructions by each warp on the scalar registers and the vector-scalar instructions are a mix (more on that later).
Which instruction is being executed is determined by three values:
- opcode,
- funct3,
- funct7
All of the vector instructions have their scalar equivalent but not vice versa.
Specifically, the jump and branch instructions are scalar-only, because only the warps have a program counter (`jal`, `jalr`, `beq`, `bne`, `blt`, `bge`).
The most significant bit of the opcode is always equal to 0 for vector instructions and to 1 for the other types.
That means that turning a vector instruction into its scalar counterpart is equivalent to setting that bit: `opcode | (1 << 6)`.
#### Instruction list
Below is the instruction list.
The `S` bit in the opcode denotes whether the instruction is scalar (1) or vector (0).
| mnemonic | opcode | funct3 | funct7 |
|----------|---------|--------|-----------|
| **U-type** | | | |
| lui | S110111 | — | — |
| auipc | S010111 | — | — |
| **I-type arithmetic** | | |
| addi | S010011 | 000 | — |
| slti | S010011 | 010 | — |
| xori | S010011 | 100 | — |
| ori | S010011 | 110 | — |
| andi | S010011 | 111 | — |
| slli | S010011 | 001 | 0000000X |
| srli | S010011 | 101 | 0000000X |
| srai | S010011 | 101 | 0100000X |
| **R-type** | | | |
| add | S110011 | 000 | 00000000 |
| sub | S110011 | 000 | 01000000 |
| sll | S110011 | 001 | 00000000 |
| slt | S110011 | 010 | 00000000 |
| xor | S110011 | 100 | 00000000 |
| srl | S110011 | 101 | 00000000 |
| sra | S110011 | 101 | 01000000 |
| or | S110011 | 110 | 00000000 |
| and | S110011 | 111 | 00000000 |
| **Load** | | | |
| lb | S000011 | 000 | — |
| lh | S000011 | 001 | — |
| lw | S000011 | 010 | — |
| **Store** | | | |
| sb | S100011 | 000 | — |
| sh | S100011 | 001 | — |
| sw | S100011 | 010 | — |
| **J-type** | | | |
| jal | 1110111 | — | — |
| **I-type jumps** | | | |
| jalr | 1110011 | 000 | — |
| **B-type** | | | |
| beq | 1110011 | 000 | — |
| bne | 1110011 | 001 | — |
| blt | 1110011 | 100 | — |
| bge | 1110011 | 101 | — |
| **HALT** | | | |
| halt | 1111111 | — | — |
| **SX type** | | | |
| sx.slt | 1111110 | — | — |
| sx.slti | 1111101 | — | — |
## Assembly
Currently, the supported assembly is quite simple.
The assembler takes a single input file and compiles it line by line into machine code.
There are two directives supported:
- `.blocks <num_blocks>` - denotes the number of blocks to dispatch to the GPU,
- `.warps <num_warps>` - denotes the number of warps to execute per block
Together they form an API similar to that of CUDA:
```cuda
kernel<<<numBlocks, threadsPerBlock>>>(args...)`
```
The key difference being that CUDA allows you to set the number of threads per block while this GPU accepts the number of warps per block as a kernel parameter.
A compiler developer can still implement the CUDA API using execution masking.
### Syntax
The general syntax looks as follows:
```
<mnemonic> <rd>, <rs1>, <rs2> ; For R-type
<mnemonic> <rd>, <rs1>, <imm> ; For I-type
<mnemonic> <rd>, <imm> ; For U-type
<mnemonic> <rd>, <imm>(<rs1>) ; For Load/Store
HALT ; For HALT
jalr <rd>, <label> ; jump to label
jalr <rd>, <imm>(<rs1>) ; jump to register + offset
```
In order to turn the instruction from vector to scalar you can add the `s.` prefix.
So if you want to execute the scalar version of `addi` you would put `s.addi` as the mnemonic and use scalar registers as `src` and `dest`.
Each of the operands must be separated by a comma.
The comments are single line and the comment char is `#`.
#### Example
An example program might look like this:
```python
.blocks 32
.warps 12
# This is a comment
jalr x0, label # jump to label
label: addi x5, x1, 1 # x5 := thread_id + 1
sx.slti s1, x5, 5 # s1[thread_id] := x5 < 5 (mask)
sw x5, 0(x1) # mem[thread_id] := x5 (only threads with their mask bit set execute this)
halt # Stop the execution
```
## Microarchitecture
todo
## Project structure
The project is split into several subdirectories:
- `external` - contains external dependencies (e.g. doctest)
- `src` - contains the system-verilog implementation of the GPU
- `sim` - contains the verilator based simulation environment and the assembler
- `test` - contains test files for the GPU, the assembler and the simulator
## Simulation
The prerequisites for running the simulation are:
- [verilator](https://www.veripool.org/wiki/verilator)
- [cmake](https://cmake.org/)
- A C++ compiler that supports C++23 (e.g. g++-14)
Verilator is a tool that can simulate or compile system-verilog code.
In this project, verilator translates the system-verilog code into C++ which then gets included as a library in the simulator.
Once the prerequisites are installed, you can build and run the simulator executable or the tests.
There are currently two ways to do this:
### Justfile
First, and the more convenient way, is to use the provided [justfile](https://github.com/casey/just).
`Just` is a modern alternative to `make`, which makes it slightly more sane to write build scripts with.
In the case of this project, the justfile is a very thin wrapper around cmake.
The available recipes are as follows:
- `compile` - builds the verilated GPU and the simulator
- `run <input_file.as> [data_file.bin]` - builds and then runs the simulator with the given assembly file
- `test` - runs the tests for the GPU, the assembler and the simulator
- `clean` - removes the build directory
In order to use it, just type `just <recipe>` in one of the subdirectories.
**Note that the paths you pass as arguments to the `run` recipe are relative to the root of the project.
This is due to the way that the `just` command runner works.**
### CMake
As mentioned, the justfile is only a wrapper around cmake.
In case you want to use it directly, follow these steps:
```bash
mkdir build
cd build
cmake ..
cmake --build . -j$(nproc)
# The executable is build/sim/simulator
# You can also run the tests with the ctest command when in the build directory
```
### Running the simulator
The produced executable is located at `build/sim/simulator` (or you can just use the justfile).
You can run it in the following way:
```bash
./build/sim/simulator <input_file.as> <data_file.bin>
```
The simulator will first assemble the input file and load the binary data file into the GPU data memory.
The program will fail if the assembly code contained in the input file is ill-formed.
In case it manages to assemble the code, it will then run the simulation and print the first 100 words of the memory to the console.
This is a temporary solution and will be replaced by a more sophisticated output mechanism in the future.
## Acknowledgments
Special thanks go to Adam Majmudar, the creator of [tiny-gpu](https://github.com/adam-maj/tiny-gpu).
As previously mentioned, this project is heavily inspired by it and built on top of it.
The architecture itself is a modified variant of [RISC-V](https://github.com/riscv) RV32I.
Much of the knowledge I've gathered in order to create this project comes from the book General-Purpose Graphics Processor Architecture (2018) by Tor M. Aamodt, Wilson Wai Lun Fung and Timothy G. Rogers,
which I highly recommend for anyone interested in the topic.
## Roadmap
There is still a lot of work to be done around the GPU itself, the simulator and the tooling around it.
- [ ] Add more tests and verify everything works as expected
- [ ] Benchmark (add memory latency benchmarks, etc)
- [ ] Parallelize the GPU pipeline
- [ ] Simulate on GEM5 with Ramulator
- [ ] Run it on an FPGA board
Another step would be to implement a CUDA-like compiler as writing the assembly gets very tedious, especially with manually masking out the threads for branching.
|
https://github.com/deepseek-ai/DeepGEMM
|
DeepGEMM
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
Languages: C++ (44.6%), Cuda (43.0%), Python (11.6%)
csrc
csrc
deep_gemm
deep_gemm
tests
tests
third-party
third-party
...
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
LICENSE
LICENSE
README.md
README.md
> README.md
# DeepGEMM
DeepGEMM is a library designed for clean and efficient General Matrix Multiplications (GEMMs). It supports FP8 and BF16 (work in progress) for both normal and Mix-of-Experts (MoE) grouped scenarios. Written in CUDA, the library requires no kernel compilation during installation: all kernels are compiled at runtime using a lightweight Just-In-Time (JIT) module.
While DeepGEMM leverages some concepts from [CUTLASS](https://github.com/nvidia/cutlass) and [CuTe](https://github.com/NVIDIA/cutlass/tree/main/include/cute), it avoids heavy reliance on their templates or algebras. Instead, the library is designed for simplicity, with only a limited number of core kernel functions. This makes it a clean and accessible resource for learning NVIDIA GPU kernel optimization techniques.
Despite its lightweight design, DeepGEMM's performance matches or exceeds expert-tuned libraries across various matrix shapes.
## News
- 2025.07.20: DeepGEMM now supports both SM90/SM100, and has a full refactor with a low-CPU-overhead JIT CPP module.
  - NVRTC and post-compilation SASS optimization are both disabled
- NVRTC will be supported later
  - As NVCC 12.9 automatically does the FFMA interleaving, post-compilation optimizations are no longer supported
- Please see [#112](https://github.com/deepseek-ai/DeepGEMM/pull/112) for more details
- 2025.05.14: DeepGEMM now offers weight gradient kernels for dense and MoE backward! See [#95](https://github.com/deepseek-ai/DeepGEMM/pull/95) for details.
- 2025.05.07: DeepGEMM now supports NVRTC with up to 10x compilation speedup! See [#94](https://github.com/deepseek-ai/DeepGEMM/pull/94) for details. Please use `DG_JIT_USE_NVRTC=1` to enable it (may have performance loss with some cases).
- 2025.04.18: DeepGEMM now achieves up to **1550 TFLOPS** on H800! See [#74](https://github.com/deepseek-ai/DeepGEMM/pull/74), [#78](https://github.com/deepseek-ai/DeepGEMM/pull/78), [#81](https://github.com/deepseek-ai/DeepGEMM/pull/81), [#86](https://github.com/deepseek-ai/DeepGEMM/pull/86) and [340d988](https://github.com/deepseek-ai/DeepGEMM/commit/340d9880f4a418d943d34260d20a79f41f4c0526) for details.
## Roadmap
- [x] More correctness tests for grouped-contiguous layout
- [x] Shared memory swizzling for output
- [x] MoE scheduler with TMA multicast compatibility
- [x] Fix TMA multicast compatibility for indivisible shapes
- [x] Skip useless computation on M
- [ ] NVRTC as a faster compiler
- [ ] Sanitizer for testing
- [x] Weight gradient kernels for dense models
- [x] Weight gradient kernels for MoE models
- [ ] Better `get_best_configs` modeling
- [ ] CUDA PDL support
- [ ] Larger TMA multicast size for some shapes
- [x] MMA template refactor with CUTLASS
- [x] Remove shape limitations on N and K
- [ ] BF16 kernels
- [ ] Split/stream-k optimizations
- [ ] Ampere kernels
- [ ] Polish docs
## Quick start
### Requirements
- NVIDIA SM90 or SM100 architecture GPU
- Python 3.8 or higher
- Compilers with C++20 support
- CUDA Toolkit:
- Currently, CUDA 12.8 or higher is required, but support for older versions may be added in the future
- CUDA 12.8 or higher for SM90
- **We highly recommend 12.9 or higher for the best performance**
- CUDA 12.9 or higher for SM100
- PyTorch 2.1 or higher
- CUTLASS 4.0 or higher (could be cloned by Git submodule)
- `{fmt}` library (could be cloned by Git submodule)
### Development
```bash
# Submodule must be cloned
git clone --recursive git@github.com:deepseek-ai/DeepGEMM.git
cd DeepGEMM
# Link some essential includes and build the CPP JIT module
cat develop.sh
./develop.sh
# Test all GEMM implementations
python tests/test_layout.py
python tests/test_core.py
```
### Installation
```bash
cat install.sh
./install.sh
```
Then, import `deep_gemm` in your Python project, and enjoy!
## Interfaces
#### Notices
This library provides optimized GEMM kernels for NVIDIA GPUs with a naming convention: `D = C + A @ B`. The input shape layout is NT (non-transposed A, transposed B). While the SM90 implementation supports only the NT memory layout (row-major, col-major), the SM100 implementation supports all memory layouts (NT, TN, NN, TT). For example, `fp8_gemm_nt` will compute `D = C + A @ B.T`.
For both architectures, the LHS scaling factor is required to have a TMA-aligned and transposed layout, and the scaling-factor data format differs between SM90 and SM100:
- SM90 requires scaling factors in FP32 format.
- SM100 requires scaling factors in packed [UE8M0](https://docs.nvidia.com/cuda/parallel-thread-execution/#alternate-floating-point-data-formats) format, which packs 4 UE8M0 into a single `torch.int`.
Please note that operations like input transposition or FP8 casting must be handled separately by the user; implement them or fuse them into prior kernels independently. The library provides some simple PyTorch utility functions for this, but they may be slower, as our primary focus is on optimizing the GEMM kernels themselves.
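To make the packed UE8M0 format more concrete, here is a conceptual Python sketch (not DeepGEMM's API). It assumes the scale is a positive power of two, an FP32-style bias of 127, and a little-endian byte order within the 32-bit word; the layout used by the real kernels may differ.
```python
import math

def to_ue8m0(scale: float) -> int:
    # UE8M0 keeps only an 8-bit biased exponent of a power-of-two scale.
    exponent = int(math.log2(scale))
    return (exponent + 127) & 0xFF

def pack4(scales) -> int:
    # Pack four UE8M0 bytes into one 32-bit word (assumed little-endian order).
    assert len(scales) == 4
    word = 0
    for i, s in enumerate(scales):
        word |= to_ue8m0(s) << (8 * i)
    return word

print(hex(pack4([1.0, 2.0, 0.5, 4.0])))  # 0x817e807f
```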
#### Normal dense GEMMs (non-grouped)
To perform a basic non-grouped FP8 GEMM, call the `fp8_gemm_{nt, nn, tn, tt}` function. For more details, please refer to the function documentation.
#### Grouped GEMMs (contiguous layout)
Unlike traditional grouped GEMMs in CUTLASS, DeepGEMM groups only the M-axis, while N and K must remain fixed. This design is tailored for scenarios where experts in an MoE model share the same shape. For training forward passes or inference prefilling, where each expert may process a varying number of tokens, we concatenate these tokens into a single tensor, referred to as the "contiguous" layout. Note that each expert segment must be aligned to the GEMM M block size (`get_mk_alignment_for_contiguous_layout()`). For more information, please refer to the `m_grouped_fp8_gemm_{nt, nn}_contiguous` function documentation.
We also provide a K-axis-grouped API for MoE weight backward (where M and N must remain fixed), please refer to `k_grouped_fp8_gemm_tn_contiguous` for more information.
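As an illustration of the alignment requirement for the contiguous layout, here is a hypothetical Python sketch (not DeepGEMM's API) that pads each expert's token count up to an assumed M block alignment of 128 before concatenation; in practice the value comes from `get_mk_alignment_for_contiguous_layout()`.
```python
def aligned_offsets(tokens_per_expert, m_alignment):
    # Return each expert's starting row in the concatenated tensor and the total padded M.
    offsets, cursor = [], 0
    for n in tokens_per_expert:
        offsets.append(cursor)
        cursor += (n + m_alignment - 1) // m_alignment * m_alignment  # round up
    return offsets, cursor

print(aligned_offsets([300, 50, 1000], 128))  # ([0, 384, 512], 1536)
```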
#### Grouped GEMMs (masked layout)
During the inference decoding phase, when CUDA graph is enabled and the CPU is unaware of the number of tokens each expert receives, we support masked grouped GEMMs. By providing a mask tensor, the kernel computes only the valid portions.
Use `fp8_m_grouped_gemm_nt_masked` for this purpose and consult the relevant documentation. An example usage is to use the output of low-latency kernels from [DeepEP](https://github.com/deepseek-ai/DeepEP) as input.
#### Utilities
The library provides some utility functions besides the above kernels:
- `deep_gemm.set_num_sms`: set the maximum SM count to use
- `deep_gemm.get_num_sms`: get the current SM maximum count (return the device SM count if not set)
- `deep_gemm.transform_sf_into_required_layout`: transform scaling factors into required layout
- `deep_gemm.get_tma_aligned_size`: get the required TMA alignment size
- `deep_gemm.get_mk_alignment_for_contiguous_layout`: get the group-level alignment requirement for grouped contiguous layout
- `deep_gemm.get_mn_major_tma_aligned_tensor`: get a MN-major TMA-aligned tensor
- `deep_gemm.get_mn_major_tma_aligned_packed_ue8m0_tensor`: get a MN-major TMA-aligned tensor (with packing FP32 into UE8M0)
- `deep_gemm.get_k_grouped_mn_major_tma_aligned_packed_ue8m0_tensor`: K-grouped GEMM packing kernel
The library also provides some environment variables, which may be useful:
- General
- `DG_JIT_DEBUG`: `0` or `1`, print more JIT debugging information, `0` by default
- JIT cache related
- `DG_JIT_CACHE_DIR`: string, the cache directory to store compiled kernels, `$HOME/.deep_gemm` by default
- NVCC/NVRTC selections
 - `DG_JIT_USE_NVRTC`: `0` or `1`, use NVRTC instead of NVCC; faster compilation but may have lower performance in some cases, `0` by default
- `DG_JIT_NVCC_COMPILER`: string, specified NVCC compiler path; will find in `torch.utils.cpp_extension.CUDA_HOME` by default
- Compiler options
- `DG_JIT_PTXAS_VERBOSE`: `0` or `1`, show detailed PTXAS compiler output, `0` by default
- `DG_JIT_PRINT_COMPILER_COMMAND`: `0` or `1`, print NVCC compilation command, `0` by default
- Heuristic selection
- `DG_PRINT_CONFIGS`: `0` or `1`, print selected configs for each shape, `0` by default
For additional examples and details, please refer to [the test code](tests/test_core.py) or review the corresponding Python documentation.
## Acknowledgement
DeepGEMM is inspired by the [CUTLASS](https://github.com/nvidia/cutlass) project. Thanks and respect to the developers!
## License
This code repository is released under [the MIT License](LICENSE).
|
https://github.com/nanochess/transputer
|
transputer
Transputer T805 emulator, assembler, Pascal compiler, operating system, and K&R C compiler.
Languages: C (67.5%), Pascal (23.5%), JavaScript (7.2%), Assembly (1.1%), HTML (0.5%), Shell (0.1%), Roff (0.1%)
js
js
m3d
m3d
os
os
os_final
os_final
pascal
pascal
...
.gitignore
.gitignore
LICENSE.txt
LICENSE.txt
README.md
README.md
README.png
README.png
README1.png
README1.png
> README.md
## Transputer T805 emulator
### (also including assembler, Pascal compiler, Small-C compiler and mini-OS, K&R C compiler and full OS)
#### by Oscar Toledo G. https://nanochess.org/
Once upon a time, when I was a teen (1993), I wrote an almost full Pascal compiler for a transputer processor (the FILE type was never completed).
It was the convergence of several things I had been learning that year: Pascal (The BYTE Book of Pascal), code generation (Compilers: Principles, Techniques and Tools), and transputer programming.
It was a time when the INMOS transputer promised parallel computing for everyone, but it was too expensive. They did a few good things, like a very fast 32-bit T805 transputer with 64-bit floating-point before the Intel 486DX2 was a thing.
In case you want to read the complete article (first in the series): [https://nanochess.org/pascal.html](https://nanochess.org/pascal.html)
I've added also my early operating system complete with simple command-line interface, text editor, C compiler, and assembler (second in the series): [https://nanochess.org/bootstrapping_c_os_transputer.html](https://nanochess.org/bootstrapping_c_os_transputer.html)
Lately I've added my full operating system complete with subdirectories, multiple drives, and with almost full K&R C compiler along syntax coloring for the editor (last in the series): [https://nanochess.org/transputer_operating_system.html](https://nanochess.org/transputer_operating_system.html)
Most recently I've added a Javascript version of the transputer emulator that can run immediately in your web browser: [https://nanochess.org/transputer_emulator.html](https://nanochess.org/transputer_emulator.html)
### What we have here
In order for you to experience my Pascal compiler, I needed two tools in modern C: an emulator for the transputer written from the ground up, and a port of the assembler that ran on my Z280 host machine (the transputer was a board for the Z280 computer).
And nope, it wasn't a standard INMOS board, it was a board specifically designed for the Z280 computer.
The emulator at the start supported _only_ the instructions used by my bootstrap code, my Pascal compiler, and a Ray Tracer program I ported from C to Pascal. Later, I added a few more instructions as well.
Currently the core unhandled instructions are: _alt_, _talt_, _altwt_, _taltwt_, _altend_, _dist_, _disc_, _diss_, _enbt_, _enbc_, _enbs_, _fmul_, and _stoperr_.
The T414 unhandled instructions are _unpacksn_, _postnormsn_, _roundsn_, _ldinf_, and _cflerr_.
The T800 unhandled instructions are _move2dinit_, _move2dall_, _move2dnonzero_, _move2dzero_, _bitcnt_, _bitrevword_, _bitrevnbits_, _fpremfirst_, _fpremstep_, _fpnan_, _fpordered_, and _fpnotfinite_.
Finally, the T805 unhandled instructions are _break_, _clrj0break_, _setj0break_, _testj0break_, and _lddevid_, along with support for _j 0_ to be taken as a breakpoint.
The assembler, on the other hand, is based on more modern code used for my later C compiler for the transputer (described in the 2nd and 3rd articles), and supports the full instruction set of an Inmos T805 transputer.
Compilation instructions (macOS):
cc tem.c -o tem
cc tasm.c -o tasm
Compilation instructions (Linux):
cc tem.c -o tem -lm
cc tasm.c -o tasm
Compilation instructions (Windows Visual Studio 2022):
cl tem.c -o tem
cl tasm.c -o tasm
For Windows replace the slash / with the backslash \
### Pascal compiler
The Pascal compiler follows Niklaus Wirth's 1971 specification, and it is composed of the following files:
pascal/VARIABLE.PAS
pascal/ERRORES.PAS
pascal/ANALEXIC.PAS
pascal/GENCODIG.PAS
pascal/EXPRESIO.PAS
pascal/SENTENCI.PAS
pascal/DECLARAC.PAS
pascal/PRINCIPA.PAS
A transputer executable is provided so you can compile programs immediately:
pascal/pascal.cmg
An older version of the executable is provided for historical purposes (it has a bug handling NIL):
pascal/pascal0.cmg
Two example programs are provided:
pascal/Hanoi.pas Hanoi tower solver (based on a book, but I forgot which one)
pascal/Animales.pas The animals question game.
To compile a Pascal program use this (also in *compile.sh*):
./tem pascal/pascal.cmg Animales.pas >animales.len
./tasm animales.len animales.cmg library.len
To execute the compiled result:
./tem animales.cmg
There is also a longer command line for compiling the Pascal compiler itself. Because it is somewhat cumbersome, I put it in *make_compiler.sh*
The file *library.len* contains the support library for the Pascal compiler.
The LEN extension means Listado ENsamblador (assembler listing), while the CMG extension means Codigo Maquina G10 (G10 machine code; G10 was the name given to the transputer board).
## Ray tracer
Once my Pascal compiler was working, I ported the Ray Tracer from the book "Programming in 3 Dimensions: 3-D Graphics, Ray Tracing, and Animation" by Watkins & Sharp.
You can compile it doing this:
./tem pascal/pascal.cmg pascal/M3D.PAS >m3d.len
./tasm m3d.len m3d.cmg library.len
To execute it:
./tem m3d.cmg m3d/BOLACRIS.M3D
Originally the image was displayed directly on the screen using a separate "driver" program, but I decided the complication of adding the libSDL library to handle the display wasn't worth it. Instead I've adapted the code to avoid making yet another emulator executable, so a BMP image file named image001.bmp will appear in your directory.
I did a few demos and animations. I still haven't found the animations.
You can also find a Julia demo as pascal/julia.pas ported from the same book.

## Small-C compiler
The Small-C compiler is based on Ron Cain's public domain Small-C compiler published in Dr. Dobb's Journal issue 45. I've ported it to the transputer and made a much-enhanced version that generates pretty small code using my tree generator evaluator (the compiler weighs in at 16 KB of code).
To execute it:
./tem -cc os/TC2.CMG
The first two questions can be answered N (stop on errors and show C language source code). It will then ask for the input file name, and the output file name.
The resulting assembly file can be passed through tasm, adding STDIO2.LEN to run it with the emulator, or STDIO3.LEN to run it inside the operating system (see below).
## Early operating system
This is an early version of my first operating system (Jun/1995). It is composed of several files:
os/ARRANQUE.LEN Boot sector.
os/EDITOR.C Visual text editor for running it inside the OS.
os/ENSG10.C The transputer assembler for running it inside the OS.
os/INTERFAZ.C The command-line interpreter for the OS.
os/MONITOR.C Debugging monitor.
os/SOM32.C The operating system (SOM32 stands for Sistema Operativo Mexicano 32 bits)
os/TC.C The Small-C compiler.
os/TC2.C The Small-C compiler with optimized code generator.
os/MENSAJES.LEN Library for assembling som32.c
os/STDIO.LEN Library for the tc.c compiler (running in host)
os/STDIO2.LEN Library for the tc2.c compiler (running in host)
os/STDIO3.LEN Library for the tc2.c compiler (running inside the OS)
os/buildboot.c Program to build a 1.44 mb disk image file.
To run the operating system (using the prebuilt disk image):
./tem -os os/MAESTRO.CMG os/disk.img
For macOS, I suggest setting your terminal to ANSI/VT100 mode, 80 columns by 25 rows, using the PC-8 or Latin/USA DOS character set. On recent Windows 10, the emulator will automatically enable ANSI emulation.
The disk image is built with os/build_disk.sh

Each compiled C file generates a LEN file. There are many LEN files, so I've provided os/assemble_os.sh for assembling all in one pass.
It requires the host system to provide an ANSI escape terminal, because it refreshes the terminal like a text framebuffer. It works just fine in macOS, Windows, and Linux, including mapping the function and arrow keys for the visual text editor.
This environment is pretty powerful, as I evolved the operating system starting from this.

## Full operating system
This is my full-blown operating system (Spring 1996). It includes a lot of features, like multiple drives (A: is the floppy, B: is a RAM disk, C: is the hard drive, D: is a CD-ROM in ISO-9660 format).
The C compiler supports the full K&R syntax (except for static and extern, because there's no linker).
To run the operating system (using the prebuilt disk image):
./tem -os2 os_final/MAESTRO.CMG os_final/floppy.img os_final/harddisk.img
You can optionally add an extra argument with an ISO file to get CD-ROM access.
I suggest setting your macOS terminal to ANSI/VT100 mode, 80 columns by 25 rows, using the ISO Latin 1 (ISO-8859-1) character set (this is done automatically on a recent build of Windows 10). My personal terminal added block shapes in the characters $80-$9f, but these will appear as blanks in macOS, or as weird symbols in Windows and Linux.
Some commands you can test inside the operating system:
DIR A:
DIR C:
AYUDA
MEM
C:EDITOR
In macOS you can use Fn+F1 to access the help box of the visual text editor, and type Fn+F4 to open the directory browsing for reading text files.
In Windows and Linux you can use F1 to access the help box of the visual text editor, and type F4 to open the directory browsing for reading text files.
Use C:CC to invoke the C compiler, C:ENS to invoke the assembler, C:EJECUTABLE to build assembler output into a working executable. There are instructions for compiling programs in the C:/Documentos/Programas.doc file.
This is an example compilation of a program:
C:CC
N
N
C:/C/Hora.c
B:Hora.len
C:ENS
B:Hora.len
C:/Lib/stdio.len
[empty line]
B:Hora.e
C:EJECUTABLE
B:Hora.e
512
0
C:Hora.p
The disk images are built with build_f1.sh, build_f2.sh, and build_hd.sh, and require some time for you to copy the files into the drives (from the emulated floppy disk to the emulated hard disk drive).
After you do some development inside the hard disk image, you need a way to extract the data back, so I've written the extractimage.c utility to dump a complete hard disk image as a tree of files.

## Javascript emulator
Someone asked me about an online version of the emulator, and I thought it was a good idea, because installing the Visual Studio C compiler on Windows can take around an hour and a half.
It wasn't so easy because Javascript always works with floating-point, and the bitwise operations convert anything into signed 32-bit integers.
I've used the excellent jsTerm package (MIT license) to provide the output, and I was also able to integrate my video font (an improved VGA font along with box graphics).
You can find it in the JS directory.
It is also hosted here: [https://nanochess.org/transputer_emulator.html](https://nanochess.org/transputer_emulator.html)
## ISA board
Recently I made an ISA board compatible with the Inmos B004, because some people were asking about the possibility of running my software on real hardware, so I made programs to run with it.
The _tram_ directory contains the _comm.asm_ program for MS-DOS that replicates the input/output function of my transputer emulator.
Currently you can run the Pascal compiler, the Ray Tracer, and any compiled Pascal program. I've ported my transputer assembler _tasm_ to Turbo C++ 3.0, so you have the complete toolchain to rebuild the Pascal compiler.
In case you want to rebuild the assembler with Turbo C++ 3.0, you just need to set the Compact model (compiler options), load the _tram/tasm.c_ file, and do _Build all_
The _comm2.asm_ program for MS-DOS allows you to run my small operating system from a real 1.44mb floppy disk.
I've tested also the Inmos Occam compiler and it works just fine.
The _pcb_ directory contains the schematics for my board, along with the PCB files to order it from PCBway. I've been careful not to use surface-mount components, to ease building. You can select the link speed (10 or 20 Mbit/s) and the port (0150H or 0170H).
Please notice that port 0170H is only for software I want to write in the future, and it isn't compatible with the Inmos software, because the Error bit is still located at 0160H (not used by my software).

## Further notes
The original programs are under _pascal/original_ because I translated _Animales.pas_ to English. I also intend to translate the compiler error messages, but in the meantime it isn't urgent.
The _tasm_ (transputer assembler) program is still in Spanish. It should be translated to English.
I'm afraid all of the Pascal files are commented in Spanish, and even the variable names are in Spanish; the same goes for the complete operating system, the K&R C compiler, and assorted utilities. But given that it is a ton of code, I preferred to leave it as is.
|
https://github.com/NVIDIA-RTX/NVRHI
|
NVRHI
Languages: C++ (98.7%), CMake (1.3%)
.github/workflows
.github/workflows
cmake
cmake
doc
doc
include/nvrhi
include/nvrhi
src
src
...
.gitignore
.gitignore
.gitmodules
.gitmodules
CLA.txt
CLA.txt
CMakeLists.txt
CMakeLists.txt
LICENSE.txt
LICENSE.txt
> README.md
# NVRHI
[](https://github.com/NVIDIA-RTX/NVRHI/actions/workflows/build.yml)
## Introduction
NVRHI (**NV**IDIA **R**endering **H**ardware **I**nterface) is a library that implements a common abstraction layer over multiple graphics APIs (GAPIs): Direct3D 11, Direct3D 12, and Vulkan 1.2. It works on Windows (x64 only) and Linux (x64 and ARM64).
Key features:
- Automatic tracking of resource states and barrier placement (optional).
- Automatic tracking of resource usage and lifetime, deferred and safe resource destruction.
- Convenient and efficient resource binding model with little runtime overhead.
- Easy direct interaction with the underlying GAPI when necessary.
- Easy portability of the rendering code between the supported GAPIs.
- Hidden sub-allocation of upload buffers and versioning of constant buffers.
- Parallel command list recording and multi-queue rendering.
- Supports all types of pipelines: Graphics, Compute, Ray Tracing, and Meshlet.
- Validation layer and resource reflection for easy debugging.
NVRHI is used in several NVIDIA SDKs:
- [Adaptive and Variable-Rate Shading SDK](https://github.com/NVIDIAGameWorks/nas-sample)
- [Donut Framework](https://github.com/NVIDIA-RTX/Donut) and its [Samples](https://github.com/NVIDIA-RTX/Donut-Samples)
- [In-Game Inference (NVIGI) Sample](https://github.com/NVIDIA-RTX/NVIGI-3d-Sample)
- [Opacity Micro-Map SDK](https://github.com/NVIDIA-RTX/OMM)
- [RTX Character Rendering SDK](https://github.com/NVIDIA-RTX/RTXCR)
- [RTX Mega Geometry SDK](https://github.com/NVIDIA-RTX/RTXMG)
- [RTX Neural Shading SDK](https://github.com/NVIDIA-RTX/RTXNS)
- [RTX Neural Texture Compression SDK](https://github.com/NVIDIA-RTX/RTXNTC)
- [RTX Path Tracing SDK](https://github.com/NVIDIA-RTX/RTXPT)
- [RTX Texture Filtering SDK](https://github.com/NVIDIA-RTX/RTXTF)
- [RTX Texture Streaming SDK](https://github.com/NVIDIA-RTX/RTXTS)
- [RTXDI SDK](https://github.com/NVIDIA-RTX/RTXDI)
- [RTXGI SDK](https://github.com/NVIDIA-RTX/RTXGI)
Notable third-party projects using NVRHI:
- [RBDoom3-BFG](https://github.com/RobertBeckebans/RBDOOM-3-BFG)
Early versions of NVRHI have also been used in various projects created at NVIDIA, including:
- [Asteroids demo](https://developer.nvidia.com/blog/using-turing-mesh-shaders-nvidia-asteroids-demo)
- [DLSS SDK](https://developer.nvidia.com/dlss)
- [VRWorks](https://developer.nvidia.com/vrworks)
- [VXGI](https://developer.nvidia.com/vxgi)
- [WaveWorks](https://developer.nvidia.com/waveworks)
## Requirements
* Windows or Linux (x64 or ARM64)
* CMake 3.10
* A C++ 17 compiler (Visual Studio 2019, GCC 8 or Clang 6)
* Windows SDK version 10.0.22621.0 or later for DX12 support
## Building NVRHI
NVRHI can be built as a set of static libraries for use in CMake-based projects, or as a single dynamic library.
To include NVRHI into a CMake project as static libraries:
1. Add this repository as a submodule.
2. Add a `add_subdirectory(nvrhi)` directive to the parent CMakeLists.txt.
3. Add dependencies to the necessary targets:
* `nvrhi` for the interface headers, common utilities, and validation;
* `nvrhi_d3d11` for DX11 (enabled when `NVRHI_WITH_DX11` is `ON`);
* `nvrhi_d3d12` for DX12 (enabled when `NVRHI_WITH_DX12` is `ON`); and
* `nvrhi_vk` for Vulkan (enabled when `NVRHI_WITH_VULKAN` is `ON`).
To build NVRHI as a shared library (DLL or .so):
1. Clone this repository recursively (including submodules).
2. Generate the project with CMake:
* Set the `NVRHI_BUILD_SHARED` variable to `ON`.
* Make sure to set the target platform to a 64-bit one. 32-bit builds are not supported.
3. Build and install as normal.
## Using NVRHI in Applications
See the [programming guide](doc/ProgrammingGuide.md) and the [tutorial](doc/Tutorial.md).
## NVAPI Support
NVRHI includes optional support for certain DX11 and DX12 extensions available through the NVAPI library. The library is not distributed with NVRHI but is available separately [here](https://developer.nvidia.com/nvapi).
To enable NVAPI support, extract the NVAPI SDK into the `nvapi` subfolder of your main project and set the `NVRHI_WITH_NVAPI` CMake variable to `ON`.
The following extensions are supported:
- Cluster Level Acceleration Structures (DX12)
- Linear Swept Spheres (DX12, Blackwell+)
- Opacity Micro-Maps (DX12, Ada+)
- Shader Execution Reordering on DX12 (DX12, Ada+)
- Single Pass Stereo (Pascal+)
- Fast Geometry Shader with optional coordinate swizzling (Maxwell+)
- Conservative Raster and other rasterizer features (Maxwell+)
- HLSL Extensions through a fake UAV slot (see [this blog post](https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl))
## RTXMU Integration
NVRHI includes an optional integration of the [RTXMU](https://github.com/NVIDIA-RTX/RTXMU) library. The library is included as a git submodule, and can be enabled with the `NVRHI_WITH_RTXMU` CMake variable.
When RTXMU integration is enabled, all bottom-level ray tracing acceleration structures (BLAS'es) are managed by that library. All built BLAS'es that have the `AllowCompaction` flag set are automatically compacted when the `ICommandList::compactBottomLevelAccelStructs` method is called. No other configuration is necessary.
## License
NVRHI is licensed under the [MIT License](LICENSE.txt).
|
https://github.com/akamai/Mirage
|
Mirage
Mirage is a PoC memory evasion technique that relies on a vulnerable VBS enclave to hide shellcode within VTL1.
Languages: C++ (100.0%)
Mirage
Mirage
...
Mirage.sln
Mirage.sln
README.md
README.md
mirage.gif
mirage.gif
prefs_enclave_x64.dll
prefs_enclave_x64.dll
> README.md
# Mirage

Mirage is a PoC memory evasion technique that relies on a vulnerable VBS enclave to hide shellcode within VTL1.
For additional information please refer to our blogpost:
https://www.akamai.com/blog/security-research/2025-february-abusing-vbs-enclaves-evasive-malware
## Operation
The code performs the following steps:
1. Load a vulnerable version of the "prefs_enclave_x64.dll" enclave
2. Call the vulnerable "SealSettings" function to store shellcode and a "cleanup buffer" inside the enclave
3. Allocate an empty RWX buffer in VTL0
4. Call the vulnerable "UnsealSettings" function to write the shellcode from the enclave into the VTL0 executable buffer
5. Jump to shellcode
6. When the shellcode returns, call the vulnerable "UnsealSettings" function to overwrite the VTL0 shellcode buffer with the cleanup buffer
7. Sleep for 5 seconds and repeat from step 4
*This implementation is very simplistic and is only meant to demonstrate the concept - adjustments are certainly required to weaponize it.*
## Credits
Alex Gough of the Chrome Security Team for the POC exploit for CVE-2023-36880:
https://github.com/google/security-research/security/advisories/GHSA-wwr4-v5mr-3x9w
## License
Copyright 2025 Akamai Technologies Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
|
https://github.com/BlinkDL/fast.c
|
fast.c
Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code.
Languages: C (100.0%)
<no directories found>
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
dram.c
dram.c
gemv.c
gemv.c
> README.md
# fast.c
Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code.
Current optimization target:
CPU = AMD 7700 (Zen 4)
DRAM = DDR5-6000 dual channel
SSD = 4 x PCIE5.0x4
Current results:
NVME raw speed = 53+ GB/s
DRAM processing = 60.8 GB/s
SSD processing = 29.4 GB/s (huge room for improvement, such as overlapping I/O & compute, raw fs)
GEMV int8/32 = 75 ~ 147 GB/s (?!)
|
https://github.com/yosefk/funtrace
|
funtrace
A fast, small C/C++ function call tracer for x86-64/Linux, supports clang & gcc, ftrace, threads, exceptions & shared libraries
Languages: C++ (38.9%), Rust (28.2%), Python (22.5%), Assembly (5.2%), C (3.7%), Shell (1.5%)
compiler-wrappers
compiler-wrappers
funcount2sym
funcount2sym
funtrace2viz
funtrace2viz
images
images
procaddr2sym
procaddr2sym
...
.gitignore
.gitignore
Cargo.toml
Cargo.toml
LICENSE.txt
LICENSE.txt
README.md
README.md
fun_xray_so.S
fun_xray_so.S
> README.md
# funtrace - a C/C++ function call tracer for x86/Linux
A function call tracer is a kind of profiler showing **a timeline of function call and return events**. Here's an example trace captured by funtrace from [Krita](https://krita.org):

Here we can see 2 threads - whether they're running or waiting, and the changes to their callstack over time - and the source code of a selected function.
Unlike a sampling profiler such as perf, **a tracing profiler must be told what to trace** using some runtime API, and also has a **higher overhead** than the fairly low-frequency sampling of the current callstack a-la perf. What do you get in return for the hassle and the overhead (and the hassle of culling the overhead, by disabling tracing of short functions called very often)? Unlike flamegraphs showing where the program spends its time on average, traces let you **debug cases of unusually high latency**, including in production (and it's a great idea to collect traces in production, and not just during development!)
If you're interested in why tracing profilers are useful and how funtrace works, see [Profiling in production with function call traces](https://yosefk.com/blog/profiling-in-production-with-function-call-traces.html). What follows is a funtrace user guide.
- [Why funtrace?](#why-funtrace)
- [Trying funtrace](#trying-funtrace)
- [Runtime API for taking & saving trace snapshots](#runtime-api-for-taking--saving-trace-snapshots)
- ["Coretime" API for saving trace snapshots](#coretime-api-for-saving-trace-snapshots)
- [Choosing a compiler instrumentation method](#choosing-a-compiler-instrumentation-method)
- [Integrating funtrace into your build system](#integrating-funtrace-into-your-build-system)
- [Culling overhead with `funcount`](#culling-overhead-with-funcount)
- [Decoding traces](#decoding-traces)
- [Compile time & runtime configuration](#compile-time--runtime-configuration)
- [Controlling which functions are traced](#controlling-which-functions-are-traced)
- [Disabling & enabling tracing](#disabling--enabling-tracing)
- [Controlling buffer sizes & lifetimes](#controlling-buffer-sizes--lifetimes)
- [Limitations](#limitations)
- [Funtrace file format](#funtrace-file-format)
# Why funtrace?
* **Low overhead tracing** - FWIW, in my microbenchmark I get <10 ns per instrumented call or return
* **6x faster** than an LLVM XRay microbenchmark with "flight recorder logging" and 15-18x faster than "basic logging"
* **4.5x faster** than a uftrace microbenchmark (note that uftrace isn't just designed for a somewhat different workflow than funtrace - in that it's similar to XRay - but it also has many more features; [check it out](https://github.com/namhyung/uftrace)!)
* Supports **threads, shared libraries and exceptions**
* Supports ftrace events, showing **thread scheduling states** alongside function calls & returns, so you see when time is spent waiting as opposed to computing
* Works with **stock gcc or clang** - no custom compilers or compiler passes
* Easy to integrate into a build system, and even easier to **try *without* touching the build system** using tiny compiler-wrapping scripts “passing all the right flags”
* Small (just ~1K LOC for the runtime) and thus:
* **easy to port**
* **easy to extend** (say, to support some variant of “green threads”/fibers)
* **easy to audit** in case you’re reluctant to add something intrusive like this into your system without understanding it well (as I personally would be!)
* **Relatively comprehensive** – it comes with its own **tool for finding and cutting instrumentation overhead** in test runs too large to fully trace;
support for remapping file paths to locate debug information and source code; a way to **extract trace data from core dumps**, etc.
# Trying funtrace
You can clone the repo & build the trace decoder (or unzip [a binary release](https://github.com/yosefk/funtrace/releases)), compile & run a simple example program, and decode its output traces as follows:
``` shell
# clone the source...
git clone https://github.com/yosefk/funtrace
# ...or unzip a binary release
unzip funtrace.zip
cd funtrace
./simple-example/build.sh
./simple-example/run.sh
```
This actually tests 4 different instrumented builds - 2 with gcc and 2 with clang; we'll discuss below how to choose the best method for you. Troubleshooting:
* With an older clang, you'll get `clang: error: unknown argument: '-fxray-shared'` - in that case, you can use 3 instrumentation methods out of the 4.
* You might have issues accessing ftrace data. This is not a problem for _function tracing_ but it prevents _thread state tracing_, which could tell us when threads are running and when they're waiting:
```
WARNING: funtrace - error initializing ftrace (...), compile with -DFUNTRACE_FTRACE_EVENTS_IN_BUF=0
or run under `env FUNTRACE_FTRACE_EVENTS_IN_BUF=0` if you don't want to collect ftrace / see this warning
```
You can ignore this message, or disable ftrace as described in the message, or you can try making ftrace work. The problem is usually permissions, and one way to make ftrace usable permissions-wise is **`sudo chown -R $USER /sys/kernel/tracing`**. Inside containers, things are more involved, and you might want to consult a source knowing more than this guide.
You can view the traces produced from the simple example above as follows:
```
pip install viztracer
rehash
vizviewer out/funtrace-fi-gcc.json
vizviewer out/funtrace-pg.json
vizviewer out/funtrace-fi-clang.json
vizviewer out/funtrace-xray.json
```
Funtrace uses [viztracer](https://github.com/gaogaotiantian/viztracer) for visualizing traces, in particular because of its ability to show source code, unlike stock [Perfetto](https://perfetto.dev/) (the basis for vizviewer.)
To build your own program with tracing enabled, you can use `compiler-wrappers/funtrace-pg-g++`, `compiler-wrappers/funtrace-finstr-clang++` or the other two compiler wrapper scripts, just like `simple-example/build.sh` does. If the program uses autoconf/configure, you can set the `$CXX` env var to point to one of these scripts, and if it uses cmake, you can pass `-DCMAKE_CXX_COMPILER=/your/chosen/wrapper` to cmake.
Note that the compiler wrappers slow down the configuration stage, because they compile & link funtrace.cpp, and this is costly at build system config time if the build system compiles many small programs to test for compiler features, library availability and such. For the build itself, the overhead of compiling funtrace.cpp is lower, but might still be annoying if you use a fast linker like mold and are used to near-instantaneous linking. The good thing about the compiler wrappers is that they make trying funtrace easy; if you decide to use funtrace in your program, however, you will probably want to pass the required compiler flags yourself as described below, which will eliminate the build-time overhead of the compiler wrappers.
Once the program compiles, you can run it as usual, and then `killall -SIGTRAP your-program` (or `kill -SIGTRAP <pid>`) when you want to get a trace. The trace will go to `funtrace.raw`; if you use SIGTRAP multiple times, many trace samples will be written to the file. Now you can run `funtrace2viz` the way `simple-example/run.sh` does. You get the funtrace2viz binary from `funtrace.zip`; if you cloned the source repo, you should have funtrace2viz compiled if you ran `simple-example/build.sh`. funtrace2viz will produce a vizviewer JSON file from each trace sample in funtrace.raw, and you can open each JSON file in vizviewer.
Troubleshooting vizviewer issues:
* If you see **`Error: RPC framing error`** in the browser tab opened by vizviewer, **reopen the JSON from the web UI**. (Note that you want to run vizviewer on every new JSON file, _even if_ it gives you "RPC framing error" when you do it - you _don't_ want to just open the JSON from the web UI since then you won't see source code!)
* If **the timeline looks empty**, it's likely due to some mostly-idle threads having very old events causing the timeline to zoom out too much. (You can simply open the JSON with `less` or whatever - there's a line per function call; if the JSON doesn't look empty, funtrace is working.) **Try passing `--max-event-age` or `--oldest-event-time` to funtrace2viz**; it prints the time range of events recorded for each thread in each trace sample (by default, the oldest event in every sample gets the timestamp 0) and you can use these printouts to decide on the value of the flags. In the next section we'll discuss how to take snapshots at the time you want, of the time range you want, so that you needn't fiddle with flags this way.
If you build the program, run it, and decode its trace on the same machine/in the same container, life is easy. If not, note that in order for funtrace2viz to work, you need the program and its shared libraries to be accessible at the paths where they were loaded from _in the traced program run_, on the machine _where funtrace2viz runs_. And to see the source code of the functions (as opposed to just function names), you need the source files to be accessible on that machine, at the paths _where they were when the program was built_. If this is not the case, you can remap the paths using a file called `substitute-path.json` in the current directory of funtrace2viz, as described below.
As a side note, if you don't like having to remap source file paths - not just in funtrace but eg in gdb - see [refix](https://github.com/yosefk/refix) which can help to mostly avoid this.
Note that if you choose to try XRay instrumentation (`compiler-wrappers/funtrace-xray-clang++`), you need to run with `env XRAY_OPTIONS="patch_premain=true"` like simple-examples/run.sh does. With the other instrumentation options, tracing is on by default.
The above is how you can give funtrace a quick try. The rest tells how to integrate it in your program "for real."
# Runtime API for taking & saving trace snapshots
The next thing after trying funtrace with SIGTRAP is probably using the runtime API to take snapshots of interesting time ranges. (Eventually you'll want proper build system integration - but you probably want to "play some more" beforehand, and since snapshots taken with SIGTRAP aren't taken at "the really interesting times" and capture too much, you'll want to see better snapshots.)
The recommended method for taking & saving snapshots is:
* using `funtrace_time()` to find unusually high latency in every flow you care about
* ...then use `funtrace_pause_and_get_snapshot_starting_at_time()` to capture snapshots when a high latency is observed
* ...finally, use `funtrace_write_snapshot()` when you want to save the snapshot(s) taken upon the highest latencies
In code, it looks something like this:
```c++
#include "funtrace.h"
void Server::handleRequest() {
uint64_t start_time = funtrace_time();
doStuff();
uint64_t latency = funtrace_time() - start_time;
if(latency > _slowest) {
funtrace_free_snapshot(_snapshot);
_snapshot = funtrace_pause_and_get_snapshot_starting_at_time(start_time);
_slowest = latency;
}
}
Server::~Server() {
funtrace_write_snapshot("funtrace-request.raw", _snapshot);
funtrace_free_snapshot(_snapshot);
}
```
There's also `funtrace_pause_and_get_snapshot_up_to_age(max_event_age)` - very similar to `funtrace_pause_and_get_snapshot_starting_at_time(start_time)`; and if you want the full content of the trace buffers without an event age limit, there's `funtrace_pause_and_get_snapshot()`. And you can write the snapshot straight from the threads' trace buffers to a file, without allocating memory for a snapshot, using `funtrace_pause_and_write_current_snapshot()` (this is exactly what the SIGTRAP handler does.)
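A minimal sketch of the simpler variants (assuming, as in the example above, that the returned snapshot works with `funtrace_write_snapshot()`/`funtrace_free_snapshot()`, that the age limit is expressed in TSC ticks, and that the no-allocation variant takes no arguments - check `funtrace.h` for the exact signatures):
```c++
#include "funtrace.h"

// Save everything traced in roughly the last second, then free the snapshot.
void save_last_second() {
    uint64_t one_second_in_ticks = funtrace_ticks_per_second();
    auto snapshot = funtrace_pause_and_get_snapshot_up_to_age(one_second_in_ticks);
    funtrace_write_snapshot("funtrace-last-second.raw", snapshot);
    funtrace_free_snapshot(snapshot);
}

// Or write the full trace buffers straight to a file, like the SIGTRAP handler does:
void save_everything() {
    funtrace_pause_and_write_current_snapshot();
}
```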
As implied by their names, **all of these functions pause tracing until they're done** (so that traced events aren't overwritten with new events before we have the chance to save them.) This means that, for example, a concurrent server where `Server::handleRequest()` is called from multiple threads might have a gap in one of the snapshots taken by 2 threads at about the same time; hopefully, unusual latency in 2 threads at the same time is rare, and even if it does happen, you'll get at least one good snapshot.
All of the snapshot-saving functions write to files; an interface for sending the data to some arbitrary stream could be added given demand.
Finally, a note on the time functions:
* `funtrace_time()` is a thin wrapper around `__rdtsc()` so you needn't worry about its cost
* `funtrace_ticks_per_second()` gives you the TSC frequency in case you want to convert timestamps or time diffs to seconds/ns (see the small conversion sketch below)
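For example, a tick difference from `funtrace_time()` can be converted to nanoseconds with a one-liner like this (a minimal sketch; the helper name is just for illustration):
```c++
#include "funtrace.h"

// Convert a tick difference from funtrace_time() into nanoseconds.
inline double ticks_to_ns(uint64_t ticks) {
    return (double)ticks * 1e9 / (double)funtrace_ticks_per_second();
}
```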
# "Coretime API" for saving trace snapshots
While we're on the subject of snapshots - you can get trace data from a core dump by loading `funtrace_gdb.py` from gdb - by running `gdb -x funtrace_gdb.py`, or using the gdb command `python execfile("funtrace_gdb.py")`, or somewhere in `.gdbinit`. Then you'll get the extension command `funtrace` which works something like this:
```
(gdb) funtrace
funtrace: saving proc mappings
funtrace: core dump generated by `your-program arg1 arg2`
funtrace: thread 1287700 your-program - saving 1048576 bytes of data read from 0x7fb199c00000
funtrace: thread 1287716 child - saving 1048576 bytes of data read from 0x7fb17c200000
funtrace: saving 22 ftrace events
funtrace: done - decode with `funtrace2viz funtrace.raw out` and then view in viztracer (pip install viztracer) with `vizviewer out.json`
```
Basically it's what SIGTRAP would save to `funtrace.raw`, had it been called right when the core was dumped. Can be very useful to see what the program was doing right before it crashed.
# Choosing a compiler instrumentation method
Once you have snapshots of the right time ranges, you might want to settle on a particular compiler instrumentation method. For that, the below can be helpful as well as the next section, which talks about culling overhead with the `funcount` tool (one thing which will help you choose the instrumentation method is how much overhead it adds, which differs between programs, and funcount can help estimate that overhead.)
Funtrace relies on the compiler inserting hooks upon function calls and returns. Funtrace supports 4 instrumentation methods (2 for gcc and 2 for clang), and comes with a compiler wrapper script passing the right flags to use each:
* **funtrace-finstr-g++** - gcc with `-finstrument-functions`
* **funtrace-pg-g++** - gcc with `-pg -mfentry -minstrument-return=call`
* **funtrace-finstr-clang++** - clang with `-finstrument-functions`
* **funtrace-xray-clang++** - clang with `-fxray-instrument`
**"By default," the method used by funtrace-pg-g++ and funtrace-finstr-clang++ is recommended for gcc and clang, respectively**. However, for each compiler, there are reasons to use the other method. Here's a table of the methods and their pros and cons, followed by a detailed explanation:
Method | gcc -finstr | gcc -pg | clang -finstr | clang XRay
--- | --- | --- | --- | ---
before or after inlining? | ❌ before | ✅ after | ✅✅ before or after! | ✅ after
control tracing by source path | ✅ yes | ❌ no | ❌ no | ❌ no
control tracing by function length | ✅ asm | ✅ asm | ✅ asm | ✅✅ compiler
control tracing by function name list | ✅ asm | ✅ asm | ✅ asm | ❌ no
tail call artifacts | ✅ no | ❌ yes | ✅ no | ❌ yes
untraced exception catcher artifacts | ✅ no | ❌ yes | ❌ yes | ❌ yes
needs questionable linker flags | ✅ no | ❌ yes | ✅ no | ❌ yes
We'll now explain these items in detail, and add a few points about XRay which "don't fit into the table."
* **Instrument before or after inlining?** You usually prefer "after" - "before" is likely to hurt performance too much (and you can use the NOFUNTRACE macro to suppress the tracing of a function, but you'll need to do this in too many places.) Still, instrumenting before inlining has its uses, eg you can trace the program flow and follow it in vizviewer - for an interactive and/or multithreaded program, this might be easier than using a debugger or an IDE. clang -finstrument-functions is the nicest here - it instruments before inlining, but has a sister flag -finstrument-functions-after-inlining that does what you expect.
* **Control tracing by source path** - gcc's `-finstrument-functions-exclude-file-list=.h,.hpp,/usr/include` (for example) will disable tracing in functions with filenames having the substrings on the comma-separated list. This can somewhat compensate for -finstrument-functions instrumenting before inlining, and you might otherwise use this feature for "targeted tracing."
* **Control tracing by function length** - XRay has `-fxray-instruction-threshold=N` which excludes short functions from tracing, unless they have loops that XRay assumes will run for a long time. For other instrumentation methods, funtrace comes with its own flag, `-funtrace-instr-thresh=N`, which is implemented by post-processing the assembly code produced by the compiler (funtrace supplies a script, `funtrace++`, which calls the compiler with `-S` instead of `-c` and then post-processes the assembly output and assembles it to produce the final `.o` object file.) XRay's method has 2 advantages, however. Firstly, it removes 100% of the overhead, while funtrace's method removes most (the on-entry/return hooks aren't called), but not all overhead (some extra instructions will appear relatively to the case where the function wasn't instrumented by the compiler in the first place.) Secondly, while the rest of funtrace is very solid, this bit is "hacky"/somewhat heuristical text processing of your compiler-generated assembly, and while it "seems to work" on large programs, you might have reservations against using this in production.
* **Control tracing by function name list** - for all methods other than XRay instrumentation, funtrace provides the flags `-funtrace-do-trace=file` and `-funtrace-no-trace=file` which let you specify which functions to exclude - or not to exclude - from tracing during assembly postprocessing (if you decide to use this postprocessing, of course.) This is nice for functions coming from .h files you cannot edit (and thus can't add the `NOFUNTRACE` attribute to the functions you want to exclude); it can also be nice to take a bunch of "frequent callees" reported by the funcount tool (described below) and suppress them using a list of mangled function names, instead of going to the source location of each and adding `NOFUNTRACE` there, especially during experimentation where you're trying to check what suppressing this or that does for the overhead. This doesn't work for XRay ATM (assembly postprocessing could probably be implemented for XRay but would require editing compiler-generated metadata used by the XRay runtime.)
* **Tail call artifacts** is when f calls g, the last thing g does is calling h, and instead of seeing f calling g _which calls h_, you see f calling g _and then h_. This happens because the compiler calls the "on return" hook from g before g's tail call to h. An annoyance if not a huge deal.
* **Untraced exception catcher artifacts** is when you have a function with a `try/catch` block _and_ tracing is disabled for it. In such a case, when an exception is thrown & caught, it looks like _all_ the functions returned and you start from a freshly empty call stack - instead of the correct picture (returning to the function that caught the exception.) This artifact comes from most instrumentation methods not calling the "on return" hook when unwinding the stack. This annoyance is avoided as long as you enable tracing for functions catching exceptions (in which case funtrace traces enough info to get around the return hook not being called upon unwinding.)
* **Questionable linker flags**:
* **clang XRay requires --allow-multiple-definition**. That's because funtrace needs to redefine XRay's on-call/on-return hooks, and there doesn't seem to be another way to do it. If XRay defines its hooks as "weak", this flag will no longer be needed.
* **gcc -pg _precludes_ -Wl,--no-undefined**. That's because its on-return hook, `__return__`, doesn't have a default definition (though its on-entry hook, `__fentry__`, apparently does, as do the entry/return hooks called by -finstrument-functions); your shared objects will get it from the executable but they won't link with `-Wl,--no-undefined`. Note that _all_ the wrappers filter out `-Wl,--no-undefined` so that shared libraries can use the `funtrace_` runtime APIs exported by the executable. However, you don't have to use the runtime APIs in shared objects - you can take snapshots only from code linked into the executable - so except for the -pg mode, this flag is not strictly necessary.
A few more words about XRay:
* **XRay instrumentation was enabled in shared libraries in late 2024** and is not yet available in officially released versions. clang versions with XRay shared library support have the `-fxray-shared` flag.
* **XRay uses dynamic code patching for enabling/disabling tracing at runtime.** This is why tracing is off unless you run under `env XRAY_OPTIONS="patch_premain=true"`, or use XRay's runtime APIs to patch the code. Funtrace has its own API, `funtrace_enable/disable_tracing()`, but it deliberately _doesn't_ call XRay's code-patching APIs. Funtrace's API is a quick way to cut most of the overhead of tracing without any self-modifying code business. It's up to you to decide, if you use XRay, whether you want to cut even more overhead by using runtime patching - downsides include creating copies of the code pages, for which you might not have the extra space, and taking more time than funtrace_enable/disable_tracing().
# Integrating funtrace into your build system
You can postpone "real" build system integration for as long as you want, if the compiler wrappers don't slow things down too much for you.
Once you do want to integrate funtrace into your build system, the short story is, **choose an instrumentation method and then compile in the way the respective wrapper in compiler-wrappers does.** However, here are some points worth noting explicitly:
* **It's fine to compile funtrace.cpp with its own compilation command.** You probably don't want to compile funtrace.cpp when linking your binary the way the wrappers do. They only do it to save you the trouble of adding funtrace.cpp to the list of files for the build system to build (which is harder/more annoying than it sounds, if you're trying to trace someone else's program with a build system you don't really know.)
* **It's best to compile funtrace.cpp without tracing, but "it can handle" being compiled with tracing.** Many build systems make it hard to compile a given file with its own compiler flags. funtrace.cpp uses NOFUNTRACE heavily to suppress tracing; the worst that can happen if you compile it with tracing is that some of its code will be traced despite its best efforts, but it should otherwise work.
* **funtrace.cpp must be compiled _into the executable_, not any of the shared libraries.** Funtrace uses TLS (thread-local storage) and accessing a `thread_local` object is a simple register+offset access when you link the code into an executable, but requires a function call if you link the code into a shared library, because now you need to find _this shared library's TLS area_. So funtrace puts its on-entry/return hooks into the executable, which exports them to the shared libraries.
* **Linker flag requirements** (XRay/`--allow-multiple-definition`, -pg/`-Wl,--no-undefined`) are documented in the previous section; for XRay, you also **need a linker wrapper** like `compiler-wrappers/xray/ld` to make sure funtrace's on-entry/return hooks from funtrace.o are passed before XRay's own hooks on the linker command line.
* **Pass -pthread** or things will break annoyingly
* **-Wl,--dynamic-list=funtrace.dyn** exports the funtrace runtime API from the executable for the shared libraries
* **-g is for source line info** (it's generally a good idea to use -g in release builds and not just debug builds - if it slows down linking, mold takes care of that; but, if you don't want to compile with -g, funtrace will still give you the function names using the ELF symbol table, only the source code will be missing from vizviewer)
* **Do _not_ pass -pg _to the linker_** - if you use gcc with -pg, and do pass it to the linker, the linker will think that you're compiling for gprof (even if you also pass `-mfentry -minstrument-return=call` which are guaranteed to break gprof, -pg's original application...), and then your program will write a useless gmon.out file in the current directory every time you run it.
* **Some flags in the wrappers are "defaults" that you can change**, specifically:
* `g++ -finstrument-functions-exclude-file-list=.h,.hpp,/usr/include` - of course you can pass a different exclude list
* `clang++ -finstrument-functions-after-inlining` - you can instead pass -finstrument-functions to instrument before inlining
* `-fxray-instruction-threshold=...` is _not_ passed by the XRay wrapper - you can set your own threshold
* **Link the program as C++** - even if it's a C program, the funtrace runtime is in C++ and you'll need to link with g++ or clang++ for things to work
All the compiler wrappers execute `compiler-wrappers/funtrace++`, itself a compiler wrapper which implements a few flags - `-funtrace-instr-thresh=N`, `-funtrace-ignore-loops`, `-funtrace-do-trace=file`, and `-funtrace-no-trace=file` - for controlling which functions get traced, by changing the assembly code produced by the compiler. If you don't need any of these flags, you needn't prefix your compilation command with `funtrace++` like the wrappers do. (Funtrace needn't touch the code generated by the compiler for any reason other than supporting these flags.)
# Culling overhead with `funcount`
If tracing slows down your program too much, you might want to exclude some functions from tracing. You can do this on some "wide basis", such as "no tracing inside this bunch of libraries, we do compile higher-level logic to trace the overall flow" or such. You can also use `-fxray-instruction-threshold` or `-funtrace-instr-thresh` to automatically exclude short functions without loops. But you might also want to do some "targeted filtering" where you **find functions called very often, and exclude those** (to save both cycles and space in the trace buffer - with many short calls, you need a much larger snapshot to see far enough into the past.)
`funcount` is a tool for counting function calls, which is recommended for finding "frequent callees" to exclude from traces. Funcount is:
* **Fast** (about as fast as funtrace and unlike the very slow callgrind)
* **Accurate** (unlike perf which doesn't know how many times a function was called, only how many cycles were spent there, and only approximately with its low-frequency sampling)
* **Thread-safe** (unlike gprof which produces garbage call counts with multithreaded programs)
* **Small** (~300 LOC) and easy to port
Finally, funcount **counts exactly the calls funtrace would trace** - nothing that's not traced is counted, and nothing that's traced is left uncounted.
You enable funcount by passing `-DFUNTRACE_FUNCOUNT` on the command line (only `funtrace.cpp` and `funtrace_pg.S` need this -D, you don't really need to recompile the whole program), or by compiling & linking `funcount.cpp` and `funcount_pg.S` instead of `funtrace.cpp` and `funtrace_pg.S` into your program - whichever is easier in your build system. If the program runs much slower than with funtrace (which can be very slow if you instrument before inlining but otherwise is fairly fast), it must be multithreaded, with the threads running the same code concurrently and fighting over the ownership of the cache lines containing the call counters maintained by funcount. You can compile with `-DFUNCOUNT_PAGE_TABLES=16` or whatever number to have each CPU core update its own copy of each call counter, getting more speed in exchange for space (not that much space - each page table is at worst the size of the executable sections, though on small machines this might matter.)
At the end of the run, you will see the message:
`function call count report saved to funcount.txt - decode with funcount2sym to get: call_count, dyn_addr, static_addr, num_bytes, bin_file, src_file:src_line, mangled_func_name`
`funcount2sym funcount.txt` prints the columns described in the message to standard output; the most commonly interesting ones are highlighted in bold:
* **`call_count` - the number of times the function was called**
* `dyn_addr` - the dynamic address of the function as loaded into the process (eg what you'd see in `gdb`)
* `static_addr` - the static address of the function in the binary file (what you'd see with `nm`)
* `num_bytes` - the number of bytes making up the function, a proxy for how many instructions long it is
* `bin_file` - the executable or shared library containing the function
* **`src_file:src_line` - the source file & line where the function is defined**, separated by ":"
* **`mangled_func_name` - the mangled function name**; you can pipe funcount2sym through `c++filt` to demangle it, though often you will want the mangled name
You can sort this report with `sort -nr` and add reports from multiple runs together with `awk`. To exclude frequently called functions from tracing, you can use the `NOFUNTRACE` attribute (as in `void NOFUNTRACE myfunc()`); `#include "funtrace.h"` to access the macro. You can also use the `-funtrace-no-trace=file` flag implemented by `funtrace++`, and pass it a file with a list of _mangled_ function names. See also "Disabling and enabling tracing" below. This might be faster than opening every relevant source file and adding `NOFUNTRACE` to every excluded function definition, and it avoids issues where the function attribute doesn't exclude the function for whatever reason.
The advantage of the NOFUNTRACE attribute, apart from being kept together with the function definition (so you know easily what's traced and what's not), is that the overhead is **fully** removed, whereas `-funtrace-no-trace=file` only removes most of the overhead - it removes the calls to the entry/exit hooks, but the code is still "scarred" by the code having been generated. This is a small fraction of the overhead but if lots and lots of functions are "scarred" this way, it can add up.
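For completeness, here's what attribute-based exclusion looks like in code (a minimal sketch; the function is made up, and the syntax follows the `void NOFUNTRACE myfunc()` form above):
```c++
#include "funtrace.h"  // provides the NOFUNTRACE attribute macro

// A short, hot helper called very often - with the attribute,
// the compiler emits no entry/return hooks for it at all.
void NOFUNTRACE frequently_called_helper() {
    // ... hot code ...
}
```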
If the source files aren't where the debug info says they are, and/or the executable or shared objects are not where they were when the process was running, you can use `substitute-path.json` in the current directory of `funcount2sym` same as with `funtrace2viz`, as described in the next section.
# Decoding traces
`funtrace2viz funtrace.raw out` will produce an `out.json`, `out.1.json`, `out.2.json` etc. per trace sample in the file. (The snapshot-saving functions only put one sample into a file; the `funtrace.raw` file appended to by SIGTRAP and its programmatic equivalent can contain multiple samples.)
If funtrace2viz can't find some of the source files or binaries it needs, it will print warnings; you can make it find the files using a `substitute-path.json` in its current directory. This JSON file should contain an array of arrays of length 2, for example:
``` json
[
["/build/server/source-dir/","/home/user/source-dir/"],
["/deployment/machine/binary-dir/","/home/user/binary-dir/"],
]
```
For every path string, funtrace2viz iterates over every pair in the array, replacing every occurrence of the first string with the second string in the pair.
Command line flags:
* `-r/--raw-timestamps`: report the raw timestamps, rather than defining the earliest timestamp in each sample as 0 and counting from there
* `-e/--executable-file-info`: on top of a function's name, file & line, show the binary it's from and its static address
* `-m/--max-event-age`: ignore events older than this age; this is most likely to be useful for SIGTRAP-type snapshots where you have very old events from mostly idle threads and they cause the GUI timeline to zoom out so much you can't see anything. You can guess what the age is in part by looking at the printouts of funtrace2viz which tells the time range of the events traced from each thread
* `-e/--oldest-event-time`: like `--max-event-age` but with the threshold defined as a timestamp instead of age
* `-t/--threads`: a comma-separated list of thread TIDs - threads outside this list are ignored (including for the purpose of interpreting `--max-event-age` - if you ignore the thread with the most recent event, then the most recent event from threads you didn't ignore becomes "the most recent event" for age calculation purposes.) This is also something that's mostly useful for SIGTRAP-type snapshots to exclude mostly idle threads
* `-s/--samples`: a comma-separated list of sample indexes - samples outside this list are ignored. Useful for the multi-sample `funtrace.raw` file appended to by SIGTRAP
* `-d/--dry`: useful for a very large multi-sample `funtrace.raw` file if you want to decide what samples to focus on; this prints the time ranges of the threads in each sample, but doesn't decode anything (decoding runs at a rate of about 1MB of binary data per second)
# Compile-time & runtime configuration
## Controlling which functions are traced
Control at function granularity is only available at build time, as follows:
* **Compiler function attributes**:
* `NOFUNTRACE` - a function attribute excluding a function from tracing (eg `void NOFUNTRACE func()` - this is the `__attribute__((...))` syntax of gcc/clang).
* `DOFUNTRACE` - a function attribute forcing the inclusion of a function in tracing - currently only meaningful for XRay, which might otherwise exclude functions due to the `-fxray-instruction-threshold=N` flag
* **Assembly filtering flags**: if you use the `funtrace++` wrapper around g++/clang++ in your build system (which you'd want to do solely to get the flags below), you get the option to filter compiler-generated assembly code to exclude some functions from tracing; this is convenient with foreign code (eg functions in standard or external library header files) as well as "to cast a wide net" based on function length a-la XRay's `-fxray-instruction-threshold=N` (_note that assembly filtering is not supported with XRay_):
* `-funtrace-do-trace=file` - the file should contain a list of whitespace-separated mangled function names; these functions will NOT be excluded from tracing
* `-funtrace-no-trace=file` - the file should contain a list of whitespace-separated mangled function names, these functions WILL be excluded from tracing
* `-funtrace-instr-thresh=N` - functions with less than N instructions will be excluded from tracing together with function calls inlined into them, UNLESS they have loops
* `-funtrace-ignore-loops` - if -funtrace-instr-thresh=N was passed, functions with less than N instructions will be excluded from tracing together with function calls inlined into them, EVEN IF they have loops
There are thus several ways to ask to include or exclude a function from tracing; what happens if they conflict?
* NOFUNTRACE "always wins" (unless there's a compiler issue where it's ignored for whatever reason) - you can't trace a function successfully excluded with NOFUNTRACE
* DOFUNTRACE currently only means the function will survive XRay filtering; it does nothing for other instrumentation methods, so the function might be excluded from tracing with these methods (eg by -finstrument-functions-after-inlining or -finstrument-functions-exclude-file-list)
* For functions which "survived exclusion by the compiler":
* A function on the list passed to -funtrace-do-trace is always kept
* Otherwise, a function on the list passed to -funtrace-no-trace is excluded, and so are function calls inlined into it
* Otherwise, a function with less than N instructions where N was defined with -funtrace-instr-thresh=N and has no loops is excluded, and so are function calls inlined into it. If it has loops but -funtrace-ignore-loops was passed, it is also excluded, and so are function calls inlined into it.
## Disabling & enabling tracing
* `funtrace_ignore_this_thread()` excludes the calling thread from tracing "forever" (there's currently no way to undo this)
* `funtrace_disable_tracing()` disables tracing globally (note that taking a snapshot effectively does the same thing until the snapshot is ready)
* `funtrace_enable_tracing()` (re-)enables the tracing globally (by default, tracing is on when the program starts so you needn't do it; "on by default" means you can get a trace from a core dump and from a live process with SIGTRAP without any tweaking to the program source)
Additionally, compiling with -DFUNTRACE_FTRACE_EVENTS_IN_BUF=0 or setting $FUNTRACE_FTRACE_EVENTS_IN_BUF to 0 at runtime effectively disables ftrace scheduling event tracing, as mentioned again in the next section.
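A minimal sketch of using these APIs (the surrounding functions and workload are hypothetical):
```c++
#include "funtrace.h"

// A mostly-idle thread whose events would only clutter snapshots:
void logger_thread_main() {
    funtrace_ignore_this_thread();  // exclude this thread from tracing "forever"
    // ... logging loop ...
}

// Skip recording during an uninteresting phase:
void run_startup_phase() {
    funtrace_disable_tracing();  // globally pause event recording
    // ... lengthy initialization we don't care to trace ...
    funtrace_enable_tracing();   // resume recording (tracing is on by default at startup)
}
```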
## Controlling buffer sizes & lifetimes
* `funtrace_set_thread_log_buf_size(log_buf_size)` sets the trace buffer size of the calling thread to `pow(2, log_buf_size)`. Passing 0 (or a value smaller than log(size of 2 trace entries), so currently 5) is equivalent to calling `funtrace_ignore_this_thread()` (see the sketch after this list)
* The following parameters can be controlled by passing `-DNAME=VALUE` to the compiler (the command line equivalent of `#define NAME VALUE`), and/or reconfigured at runtime by setting the environment variable `$NAME` to `VALUE`:
* `FUNTRACE_LOG_BUF_SIZE`: each thread starts with a thread-local trace buffer of this size (the default is 20, meaning 1M bytes = 32K trace entries ~= 16K most recent function calls.) This initial buffer size can then be changed using `funtrace_set_thread_log_buf_size()`
* `FUNTRACE_FTRACE_EVENTS_IN_BUF`: the number of entries in this process's userspace ftrace buffer (the default is 20000; the size in bytes can vary since each entry keeps one line of textual ftrace data.) Passing `-DFUNTRACE_FTRACE_EVENTS_IN_BUF=0` disables ftrace at compile time - this **cannot** be changed by setting the env var at runtime to a non-zero value.
* `FUNTRACE_GC_MAX_AGE_MS`: when set to 0, a thread's thread-local trace buffer is freed upon thread exit - which means the trace data will be missing from future snapshots, even though the events in that buffer might have been recorded during the time range covered by the snapshot. When set to a non-zero value (default: 300 ms), thread trace buffers are kept after thread exit and garbage-collected every FUNTRACE_GC_PERIOD_MS (see below); only buffers whose age exceeds FUNTRACE_GC_MAX_AGE_MS are freed. Passing `-DFUNTRACE_GC_MAX_AGE_MS=0` disables garbage collection at compile time - this **cannot** be changed by setting the env var at runtime to a non-zero value.
* `FUNTRACE_GC_PERIOD_MS`: unless compiled out by #defining FUNTRACE_GC_MAX_AGE_MS to 0, the thread trace buffer garbage collection runs every FUNTRACE_GC_PERIOD_MS ms (default: the compile-time value of FUNTRACE_GC_MAX_AGE_MS.)
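As referenced above, here is a short sketch of shrinking one thread's trace buffer at runtime; again, the `funtrace.h` header name and the worker function are assumptions for illustration:
```c++
#include <thread>
#include "funtrace.h"  // header name assumed

void low_interest_worker() {
    // Keep only a 2^16 = 64KB trace buffer for this thread instead of the
    // default 2^FUNTRACE_LOG_BUF_SIZE bytes (2^20 = 1MB unless overridden).
    funtrace_set_thread_log_buf_size(16);
    // ... traced work ...
}

int main() {
    std::thread t(low_interest_worker);
    t.join();
}
```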
# Limitations
* **Can't trace inside shared libraries unless they're loaded by an executable containing the funtrace runtime** - for example, a Python extension module written in C++ can't be traced, similarly to any other kind of plugin loaded by a program not compiled with funtrace. This is because of the TLS issue explained above.
* **Thread creation/exit and saving a trace snapshot take the same lock** - this can slow things down; hopefully not too badly since saving a snapshot is pretty fast, and creating lots of threads at runtime (rather than reusing from a thread pool) should be rare
* **ftrace / thread scheduling events might have issues near the snapshot time range boundaries**:
* Perfetto might not render thread status very clearly near the boundaries even when it's clear from the ftrace log
* There's a latency between a thread scheduling event and the moment it's delivered to funtrace's userspace thread collecting the events (we try to give this thread a high priority but will typically lack permissions to give it a real-time priority.) One way around this could be *a mechanism for "late delivery" of ftrace events into snapshots* - since most of the time, snapshots are written to the file system much later than they're captured, we could put ftrace events into those already-captured, but not-yet-written-out snapshots whose time range contains a given newly arrived event. Doable, but a bit of a hassle, could be done given demand.
* **Threads which exited by the time a snapshot was taken might be invisible in the trace** - unless the thread trace GC parameters were tuned such that the trace buffer is still around when the snapshot is taken, as explained above
* **Funcount misses constructor calls** - shouldn't matter for its goal of finding functions called so often that you want to exclude them from tracing to avoid the overhead
* **Overlapping time ranges** should never happen but might in some cases. The Perfetto/Chromium JSON spec requires events' time ranges to be nested within each other or not overlap at all. funtrace2viz takes this requirement seriously (rather than breaking it on the currently seemingly correct theory that some ways of breaking it are actually supported.) So when funtrace2viz observes that 20 functions have just returned (by seeing that a function f which called 19 other functions has just returned, perhaps because of a longjmp or an exception being caught), it produces 20 different timestamps at least 1 ns apart, 1 ns being the smallest time unit in the JSON. Some of these made-up return timestamps might cause overlap with later function calls.
* **Tail call artifacts** with some instrumentation methods, as documented in the section "Choosing compiler instrumentation"
* **Untraced exception catcher artifacts** with some instrumentation methods, as documented in the section "Choosing compiler instrumentation." A related but likely extremely rare artifact you might see with these instrumentation methods is mixing recursion and exception handling where you have a recursive function that doesn't catch an exception at the innermost recursion level but then does catch it at another level - funtrace trace analysis will incorrectly assume the exception was caught at the innermost level (unless `gcc -finstrument-functions` was used, which calls the on-return hook when unwinding the stack and doesn't require guesswork at trace analysis time.)
* **Unloading traced shared libraries within the time range of a snapshot is unsupported** - a trace snapshot contains an address space snapshot made at the end of the time range, so if a shared library was unloaded, functions traced from it will not be decodable in the trace; reusing the executable address space for new addresses will mess up decoding further. A need to dlclose libraries midway thru the tracing is probably extremely rare.
* **Mixing instrumentation methods in the same build or process wasn't tested** and might not work for various reasons; this feels like a fairly esoteric need, but can almost certainly be made to work given demand.
# Funtrace file format
You don't need to know this format unless you want to generate or process `funtrace.raw` files, or extend funtrace for your needs.
Funtrace data is binary, using little endian encoding for integers. It consists of "chunks" where each chunk has an 8-byte magic number, a 64-bit size integer, and then a sequence of data bytes of the length specified by the size integer. Here are the chunk types and the format of the data:
* **`PROCMAPS`**: the content of `/proc/self/maps` can go here; only the start, end, offset and path fields are used (permissions and inode info are ignored), and only the executable segments are listed at this stage (funtrace uses `dl_iterate_phdr` rather than `/proc/self/maps` to speed up snapshotting), but readonly data segments might go here eventually, too, eg if we implement custom log messages with [delayed formatting](https://yosefk.com/blog/delayed-printf-for-real-time-logging.html).
* **`FUNTRACE`**: an 8-byte chunk indicating the start of a snapshot, with an 8-byte frequency of the timestamp counter, used to convert counter values into nanoseconds. A snapshot is interpreted according to the memory map reported by the last encountered `PROCMAPS` chunk (there may be many snapshots in the same file; currently the funtrace runtime saves a `PROCMAPS` chunk every time it takes a snapshot but if you know that your memory map remains stable over time and you want to shave off a little bit of latency, you could tweak this.)
* **`CMD LINE`**: the process command line, used as the process name when generating the JSON. A wart worth mentioning is that currently, the funtrace runtime reads this from `/proc/self/cmdline` and replaces null characters separating the arguments with spaces, which means that the shell command `prog "aaa bbb"`, which passes a single string argument `aaa bbb`, will be saved as `prog aaa bbb` (two string arguments). So we save enough to help you see "the trace of what you're looking at" but not enough to eg use the saved command line for reproducing the run.
* **`THREADID`**: a 64b PID integer, a 64b TID integer, and a null-terminated 16-byte name string (the content of `/proc/self/comm` aka the output of `pthread_getname_np(pthread_self(),...)`.) This precedes every `TRACEBUF` chunk (documented next.)
* **`TRACEBUF`**: a variable sized chunk of length which is a multiple of 16. It contains trace entries; each entry is a 64b code pointer, and a 64b timestamp counter value. The entries are _not_ sorted by the timestamp, for 2 reasons - they come from a cyclic buffer, and the funtrace writeout code is racy, so you can have rare cases of `new_entry, old_entry, new_entry` near the end of the cyclic buffer because one of the newest entries didn't make it into the buffer so you got a much older entry. So you need to sort the entries for processing, and you need to "defend" against missing events (meaning, you could see a return without a call or a call without a return; this is not just because of the raciness of the writeout but because the cyclic buffer ends before "the end of program execution" and starts after "the start of execution" and you can have various other niceties like longjmp.) The code pointer can have the following flags set in its high bits:
* `RETURN` (63): a return event, where the code pointer points into the returning function
* `RETURN_WITH_CALLER_ADDRESS` (62): a return event where the code pointer points _into the function we're returning to_. This unfortunate tracing artifact happens under XRay instrumentation; funtrace2viz mostly recovers the flow despite this. When this bit and the previous bit are both set, this is a `CATCH` event, and the code pointer points into the function that caught the exception.
* `CALL_RETURNING_UPON_THROW` (61): marks call events that will have a return event logged for them if an exception is thrown. Under most instrumentation methods this does not happen and so funtrace2viz guesses which functions effectively returned during stack unwinding. When it sees a call entry with this flag set, it knows that this function wouldn't return without logging a return event even if an exception was thrown, which prevents it from wrongly guessing that the function returned due to unwinding.
* **`FTRACETX`**: a variable-sized chunk containing textual ftrace data (one event per line - what you read from `/sys/kernel/tracing/trace_pipe`). The timestamps in this data and the trace entries from `TRACEBUF` are from the same time source.
* **`ENDTRACE`**: a zero-sized chunk marking the end of a snapshot.
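To illustrate the chunk layout above, here is a minimal sketch of a reader that walks a `funtrace.raw` file and decodes `TRACEBUF` entries. It is not part of the funtrace tooling; the struct and helper names are made up, and it assumes a little-endian host so the 64-bit fields can be copied directly:
```c++
#include <cstdint>
#include <cstring>
#include <fstream>
#include <string>
#include <vector>

// Flag bits in the high bits of a TRACEBUF code pointer, per the format above.
constexpr uint64_t RETURN_BIT = 1ull << 63;
constexpr uint64_t RETURN_WITH_CALLER_ADDRESS_BIT = 1ull << 62;
constexpr uint64_t CALL_RETURNING_UPON_THROW_BIT = 1ull << 61;

struct Chunk {
    std::string magic;       // 8-byte chunk type, eg "TRACEBUF"
    std::vector<char> data;  // chunk payload
};

// Read all chunks from a funtrace.raw file: 8-byte magic, 64-bit LE size, data.
static std::vector<Chunk> read_chunks(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<Chunk> chunks;
    char magic[8];
    while (in.read(magic, 8)) {
        uint64_t size = 0;
        in.read(reinterpret_cast<char*>(&size), 8);  // assumes a little-endian host
        Chunk c{std::string(magic, 8), std::vector<char>(size)};
        in.read(c.data.data(), size);
        chunks.push_back(std::move(c));
    }
    return chunks;
}

int main() {
    for (const Chunk& c : read_chunks("funtrace.raw")) {
        if (c.magic != "TRACEBUF") continue;
        // Each entry is a 64b code pointer followed by a 64b timestamp;
        // entries are unsorted, so real processing must sort by timestamp.
        for (size_t i = 0; i + 16 <= c.data.size(); i += 16) {
            uint64_t code_ptr = 0, timestamp = 0;
            std::memcpy(&code_ptr, c.data.data() + i, 8);
            std::memcpy(&timestamp, c.data.data() + i + 8, 8);
            bool is_return = code_ptr & RETURN_BIT;
            bool is_catch = is_return && (code_ptr & RETURN_WITH_CALLER_ADDRESS_BIT);
            (void)timestamp; (void)is_catch; (void)CALL_RETURNING_UPON_THROW_BIT;
        }
    }
}
```
Real processing would also track the preceding `THREADID` and `PROCMAPS` chunks, convert timestamps using the frequency from the `FUNTRACE` chunk, and tolerate missing calls or returns as noted above.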
|
https://github.com/klorfmorf/Goemon64Recomp
|
Goemon64Recomp
An unofficial PC port of Mystical Ninja Starring Goemon (Nintendo 64) achieved via static recompilation.
Languages: C++ (56.0%), C (37.1%), SCSS (4.0%), CMake (1.2%), CSS (1.0%), Objective-C++ (0.2%)
.github
.github
assets
assets
docs
docs
flatpak
flatpak
icons
icons
...
.gitignore
.gitignore
.gitmodules
.gitmodules
BUILDING.md
BUILDING.md
CMakeLists.txt
CMakeLists.txt
CMakeSettings.json
CMakeSettings.json
> README.md
# Goemon 64: Recompiled
Goemon 64: Recompiled is a project that uses [N64: Recompiled](https://github.com/Mr-Wiseguy/N64Recomp) to **statically recompile** "Mystical Ninja Starring Goemon" (and soon "Goemon's Great Adventure") into a native port with many new features and enhancements. This project uses [RT64](https://github.com/rt64/rt64) as the rendering engine to provide some of these enhancements.
### [Check out the latest release here](https://github.com/klorfmorf/Goemon64Recomp/releases/latest).
### **This repository and its releases do not contain game assets. The original game is required to build or run this project.**
## Table of Contents
* [System Requirements](#system-requirements)
* [Features](#features)
* [Plug and Play](#plug-and-play)
* [Fully Intact N64 Effects](#fully-intact-n64-effects)
* [Easy-to-Use Menus](#easy-to-use-menus)
* [High Framerate Support](#high-framerate-support)
* [Widescreen and Ultrawide Support](#widescreen-and-ultrawide-support)
* [Additional Control Options](#additional-control-options)
* [Low Input Lag](#low-input-lag)
* [Instant Load Times](#instant-load-times)
* [Linux and Steam Deck Support](#linux-and-steam-deck-support)
* [Planned Features](#planned-features)
* [FAQ](#faq)
* [Known Issues](#known-issues)
* [Building](#building)
* [Libraries Used and Projects Referenced](#libraries-used-and-projects-referenced)
## System Requirements
A GPU supporting Direct3D 12.0 (Shader Model 6) or Vulkan 1.2 is required to run this project. The oldest GPUs that should be supported for each vendor are:
* GeForce GT 630
* Radeon HD 7750 (the one from 2012, not to be confused with the RX 7000 series) and newer
* Intel HD 510 (Skylake)
A CPU supporting the AVX instruction set is also required (Intel Core 2000 series or AMD Bulldozer and newer).
If you have issues with crashes on startup, make sure your graphics drivers are fully up to date.
## Features
#### Plug and Play
Simply provide your copy of the North American version of the game in the main menu and start playing! This project will automatically load assets from the provided copy, so there is no need to go through a separate extraction step or build the game yourself. Other versions of the game may be supported in the future.
#### Fully Intact N64 Effects
A lot of care was put into RT64 to make sure all graphical effects were rendered exactly as they did originally on the N64. No workarounds or "hacks" were made to replicate these effects, with the only modifications to them being made for enhancement purposes such as widescreen support. This includes framebuffer effects like the grayscale cutscenes and the Deku bubble projectile, depth effects like the lens of truth, decals such as shadows or impact textures, accurate lighting, shading effects like the fire arrows and bomb explosions, and various textures that are often rendered incorrectly.
#### Easy-to-Use Menus
Gameplay settings, graphics settings, input mappings, and audio settings can all be configured with the in-game config menu. The menus can all be used with mouse, controller, or keyboard for maximum convenience.
#### High Framerate Support
Play at any framerate you want thanks to functionality provided by RT64! Game objects and terrain, texture scrolling, screen effects, and most HUD elements are all rendered at high framerates. By default, this project is configured to run at your monitor's refresh rate. You can also play at the original framerate of the game if you prefer. **Changing framerate has no effect on gameplay.**
**Note**: External framerate limiters (such as the NVIDIA Control Panel) are known to potentially cause problems, so if you notice any stuttering then turn them off and use the manual framerate slider in the in-game graphics menu instead.
#### Widescreen and Ultrawide Support
Any aspect ratio is supported, with most effects modded to work correctly in widescreen. The HUD can also be positioned at 16:9 when using ultrawide aspect ratios if preferred.
**Note**: Some animation quirks can be seen at the edges of the screen in certain cutscenes when using very wide aspect ratios.
#### Additional Control Options
Customize your experience by setting your stick deadzone to your liking.
#### Low Input Lag
This project has been optimized to have as little input lag as possible, making the game feel more responsive than ever!
#### Instant Load Times
Saving and loading files, going from place to place, and pausing all happen in the blink of an eye thanks to the game running natively on modern hardware.
#### Linux and Steam Deck Support
A Linux binary is available for playing on most up-to-date distros, including on the Steam Deck.
To play on Steam Deck, extract the Linux build onto your deck. Then, in desktop mode, right click the Goemon64Recompiled executable file and select "Add to Steam". From there, you can return to Gaming mode and configure the controls as needed. See the [Steam Deck gyro aim FAQ section](#how-do-i-set-up-gyro-aiming-on-steam-deck) for more detailed instructions.
## Planned Features
* Goemon's Great Adventure support
* Mod support and Randomizer
* Texture Packs
* Model Replacements
* Ray Tracing (via RT64)
## FAQ
#### What is static recompilation?
Static recompilation is the process of automatically translating an application from one platform to another. For more details, check out the full description of how this project's recompilation works here: [N64: Recompiled](https://github.com/Mr-Wiseguy/N64Recomp).
#### How is this related to the decompilation project?
Unlike N64 ports in the past, this project is not based on the source code provided by a decompilation of the game. This is because static recompilation bypasses the need for decompiled source code when making a port, allowing ports to be made **without source code**. However, the reverse engineering work done by the decompilation team was invaluable for providing some of the enhancements featured in this project. For this reason, the project uses headers and some functions from the decompilation project in order to make modifications to the game. Many thanks to the decompilation team for all of the hard work they've done.
#### How do I set up gyro aiming on Steam Deck?
This project provides mouse aiming as a way to allow using gyro on Steam Deck, as the Steam Deck's gyro sensors cannot be read directly. First, launch the game in Gaming Mode, press the Steam button and go to "Controller Settings". Choose "Controller Settings" again in the menu that follows, and then set "Gyro Behavior" to "As Mouse".

You'll probably also want to change the default behavior so that you don't need to be touching the right stick to allow gyro input. To do so, click on the Gear icon to the right of "Gyro Behavior" and ensure that "Gyro Activation Buttons" is set to "None Selected (Gyro Always On)." If this isn't the case, then select that option and then press "Select None" in the following menu.
#### Where is the savefile stored?
- Windows: `%LOCALAPPDATA%\Goemon64Recompiled\saves`
- Linux: `~/.config/Goemon64Recompiled/saves`
#### How do I choose a different ROM?
**You don't.** This project is **only** a port of the US version of "Mystical Ninja Starring Goemon". The expected format is .z64, though ROMs in other formats will be automatically converted, as long as it is the correct ROM. **It is not an emulator and it cannot run any arbitrary ROM.**
If you want to play a modded ROM or in another language, note that support for modding and other languages will be added to the project itself in the future and will not rely on you supplying a different ROM.
## Known Issues
* Intel GPUs on Linux may not currently work. If you have experience with Vulkan development on Linux, help here would be greatly appreciated!
* The prebuilt Linux binary may not work correctly on some distributions of Linux. If you encounter such an issue, building the project locally yourself is recommended. A Flatpak or AppImage may be provided in the future to solve this issue. Adding the Linux version to Steam and setting "Steam Linux Runtime" as the compatibility tool or launching it via Gamescope may work around the issue. Alternatively, running the Windows version with Proton is known to work well and may also work around this issue.
* Overlays such as MSI Afterburner and other software such as Wallpaper Engine can cause performance issues with this project that prevent the game from rendering correctly. Disabling such software is recommended.
## Building
Building is not required to play this project, as prebuilt binaries (which do not contain game assets) can be found in the [Releases](https://github.com/klorfmorf/Goemon64Recomp/releases) section. Instructions on how to build this project can be found in the [BUILDING.md](BUILDING.md) file.
## Libraries Used and Projects Referenced
* [Zelda64Recomp](https://github.com/Zelda64Recomp/Zelda64Recomp) for the base upon which this project is built
* [RT64](https://github.com/rt64/rt64) for the project's rendering engine
* [RmlUi](https://github.com/mikke89/RmlUi) for building the menus and launcher
* [lunasvg](https://github.com/sammycage/lunasvg) for SVG rendering, used by RmlUi
* [FreeType](https://freetype.org/) for font rendering, used by RmlUi
* [moodycamel::ConcurrentQueue](https://github.com/cameron314/concurrentqueue) for semaphores and fast, lock-free MPMC queues
* [Gamepad Motion Helpers](https://github.com/JibbSmart/GamepadMotionHelpers) for sensor fusion and calibration algorithms to implement gyro aiming
* [Ares emulator](https://github.com/ares-emulator/ares) for RSP vector instruction reference implementations, used in RSP recompilation
Special thanks to [Jingleboy of Goemon International](https://goemoninternational.com) for drawing the icon/background graphic of Goemon's head!
Special thanks to [thecozies](https://github.com/thecozies) for designing and helping implement the launcher and config menus!
|
https://github.com/0x16000/Bunix
|
Bunix
Monolithic Kernel Developed entirely from scratch, supporting the i386 processor
Languages: C (97.5%), Makefile (1.6%)
bin
bin
drivers
drivers
include
include
init
init
isodir/boot/grub
isodir/boot/grub
...
.gitignore
.gitignore
LICENSE
LICENSE
Makefile
Makefile
README.md
README.md
linker.ld
linker.ld
> README.md

# Bunix
**Bunix** is a Unix-like operating system developed entirely from scratch by a single developer.
Focusing on **performance**, **stability**, and **security**, Bunix is an ambitious project to build a modern OS from the ground up.
> 🚧 Development may slow down occasionally due to school and personal life commitments.
---
## 🖥️ System Requirements
- **Architecture Support**: Currently supports **i386** (32-bit).
Support for **x86_64** (64-bit) is planned for future releases.
- **Boot Method**: BIOS only (for now).
**UEFI** support is also planned, though it currently works with **UEFI CSM (Compatibility Support Module)**.
---
## 🤝 Contributing
Interested in contributing to Bunix? Awesome!
Here’s what we expect from contributors:
1. Write code (obviously 😄).
2. **Test your changes** and provide a screenshot or demo showing that it works.
3. Clearly explain what your contribution does (e.g., new syscalls, keyboard drivers, improvements).
4. Unverified or vague contributions will not be accepted.
---
## 🛠️ Building from Source
Make sure you have the required dependencies installed:
1. `sudo apt-get update && sudo apt-get install qemu-system nasm mtools gcc-multilib xorriso`
2. Build: `make`
3. After building, a `bunix.iso` file will be available in the project's root directory
## Build and Run
1. Execute: `make run`
# Future of Bunix
This is definitely fun to work on and will improve over time!
We will have to wait and see.
|
https://github.com/theialab/radfoam
|
radfoam
Original implementation of "Radiant Foam: Real-Time Differentiable Ray Tracing"
Languages: Cuda (49.8%), C++ (32.9%), Python (16.4%)
configs
configs
data_loader
data_loader
external
external
radfoam_model
radfoam_model
scripts
scripts
...
.clang-format
.clang-format
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
LICENSE
LICENSE
> README.md
# Radiant Foam: Real-Time Differentiable Ray Tracing

## Shrisudhan Govindarajan, Daniel Rebain, Kwang Moo Yi, Andrea Tagliasacchi
This repository contains the official implementation of [Radiant Foam: Real-Time Differentiable Ray Tracing](https://radfoam.github.io).
The code includes scripts for training and evaluation, as well as a real-time viewer that can be used to visualize trained models, or optionally to observe the progression of models as they train. Everything in this repository is non-final and subject to change as the project is still being actively developed. **We encourage anyone citing our results to do so as RadFoam (vx), where x is the version specified for those metrics in the paper or tagged to a commit on GitHub.** This should hopefully reduce confusion.
Warning: this is an organic, free-range research codebase, and should be treated with the appropriate care when integrating it into any other software.
## Known issues
- GPU memory usage can be high for scenes with many points. You may need to reduce the `final_points` setting to train outdoor scenes on a 24GB GPU. This will hopefully be improved in the future.
- Best PSNR is achieved with the default softplus density activation, but it also causes an increase in volumetric artifacts. Using the exponential activation may result in qualitatively better renders. We are planning to add configuration options for this.
- The Delaunay triangulation is not perfectly robust, and relies on random perturbation of points and iterative retries to attempt to recover from failures. Training may stall for long periods when this occurs.
## Getting started
Start by cloning the repository and submodules:
```bash
git clone --recursive https://github.com/theialab/radfoam
```
You will need a Linux environment with Python 3.10 or newer, as well as version 12.x of the [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads) and a CUDA-compatible GPU of Compute Capability 7.0 or higher. Please ensure that your installation method for CUDA places `nvcc` in your `PATH`. The following instructions were tested with Ubuntu 24.04.
After installing the CUDA Toolkit and initializing your python virtual environment, install PyTorch 2.3 or newer. For example, with CUDA 12.1:
```bash
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu121
```
From here, there are two options:
### Option 1: build with `pip install`
Choose this option if you want to run the code as-is, and do not need to make modifications to the CUDA/C++ code.
Simply run `pip install .` in the repository root. This will build the CUDA kernels and install them along with the python bindings into your python environment. This may take some time to complete, but once finished, you should be able to run the code without further setup.
Optionally if you want to install with the frozen version of required packages, you can do so by running `pip install -r requirements.txt` before running `pip install .`
### Option 2: build with CMake
Choose this option if you intend to modify the CUDA/C++ code. Using CMake directly will allow you to quickly recompile the kernels as needed.
First install the Python dependencies:
```bash
pip install -r requirements.txt
```
Then, create a `build` directory in the repository root and run the following commands from it to initialize CMake and build the bindings library:
```bash
cmake ..
make install
```
This will install to a local `radfoam` directory in the repository root. Recompilation can be performed by re-running `make install` in the build directory.
### Training
Place the [Mip-NeRF 360](https://jonbarron.info/mipnerf360) and [Deep Blending](https://github.com/Phog/DeepBlending) datasets in `data/mipnerf360` and `data/db`.
Training can then be launched with:
```bash
python train.py -c configs/<config_file>.yaml
```
Where `<config_file>` is either one of the supplied files in the `configs` directory or your own.
You can optionally include the `--viewer` flag to train interactively, or use the `viewer.py` script to view saved checkpoints.
### Evaluation
The standard test metrics can be computed with:
```bash
python test.py -c outputs/<checkpoint_directory>/config.yaml
```
Rendering speed can be computed with:
```bash
python benchmark.py -c outputs/<checkpoint_directory>/config.yaml
```
### Checkpoints
You can find trained checkpoints, as well as COLMAP output for some scenes [here](https://drive.google.com/drive/folders/1o8ulZORogwjrfsz3E-QY3f-oPjVFrEVI?usp=drive_link).
## BibTeX
```
@article{govindarajan2025radfoam,
    author = {Govindarajan, Shrisudhan and Rebain, Daniel and Yi, Kwang Moo and Tagliasacchi, Andrea},
    title = {Radiant Foam: Real-Time Differentiable Ray Tracing},
    journal = {arXiv:2502.01157},
    year = {2025},
}
```
|
https://github.com/c3l3si4n/pugdns
|
pugdns
An experimental high-performance DNS query bruteforce tool built with AF_XDP for extremely fast and accurate bulk DNS lookups.
Languages: Go (94.3%), C (4.7%), Shell (1.0%)
.vscode
.vscode
...
.gitignore
.gitignore
LICENSE.md
LICENSE.md
README.md
README.md
bpf.go
bpf.go
build.sh
build.sh
> README.md
# pugDNS 🐾

An experimental high-performance DNS query tool built with **AF_XDP** and **eBPF** for extremely fast and accurate bulk DNS lookups. pugDNS uses an eBPF filter to efficiently capture DNS responses directly in the kernel, complementing its high-speed query injection capabilities.
## Overview
pugDNS is designed for security researchers, network administrators, and penetration testers who need to perform DNS reconnaissance at scale. By leveraging AF_XDP sockets for packet transmission and eBPF for response capturing, pugDNS can send queries and process responses at rates significantly higher than traditional tools, making it ideal for domain discovery, enumeration, and validation tasks.
pugDNS will easily saturate your network link, so it's recommended to use this tool on a server with a high-speed internet connection and appropriate network configuration (e.g., ensuring the gateway MAC is correctly resolved or specified).
## Performance
pugDNS is designed to be as fast as possible. It uses AF_XDP sockets to directly inject DNS queries into the network driver (or kernel, depending on mode), bypassing much of the usual network stack. This allows it to send DNS queries and process responses with significantly better throughput and latency than traditional tools.
The following benchmarks were performed on an AX42 Hetzner server (AMD Ryzen™ 7 PRO 8700GE) with a 1Gbit/s port. Benchmarking pugDNS against other popular DNS tools, we observed the following results using a ~20k domain list:
*(Note: Benchmarks are indicative and can vary based on hardware, network conditions, and target nameservers. The original benchmarks were run on slightly different hardware but show the relative performance gains.)*
```bash
Benchmark 1: cat b.txt | dnsx -retry 5 -r resolvers.txt
Time (mean ± σ): 19.744 s ± 0.086 s [User: 2.908 s, System: 3.358 s]
Range (min … max): 19.634 s … 19.876 s 10 runs
Benchmark 2: cat b.txt | zdns A --retries 5 --name-servers @resolvers.txt >/dev/null
Time (mean ± σ): 19.036 s ± 1.214 s [User: 4.962 s, System: 2.022 s]
Range (min … max): 17.385 s … 21.283 s 10 runs
Benchmark 3: massdns -r resolvers.txt -s 12000 -c 5 b.txt >/dev/null
Time (mean ± σ): 1.299 s ± 0.243 s [User: 0.036 s, System: 0.137 s]
Range (min … max): 1.076 s … 1.583 s 10 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 4: ./pugdns -interface enp6s0 -nameservers resolvers.txt -retries 5 -domains b.txt -retry-timeout 500ms -maxbatch 300000 -output /dev/null
Time (mean ± σ): 776.8 ms ± 7.6 ms [User: 973.6 ms, System: 603.8 ms]
Range (min … max): 767.4 ms … 792.5 ms 10 runs
Summary
./pugdns -interface enp6s0 -nameservers resolvers.txt -retries 5 -domains b.txt -retry-timeout 500ms -maxbatch 300000 -output /dev/null ran
1.67 ± 0.31 times faster than massdns -r resolvers.txt -s 12000 -c 5 b.txt >/dev/null
24.50 ± 1.58 times faster than cat b.txt | zdns A --retries 5 --name-servers @resolvers.txt >/dev/null
25.42 ± 0.27 times faster than cat b.txt | dnsx -retry 5 -r resolvers.txt
```
Looking into the accuracy and number of responses that came back, we had the following numbers testing with a 19966 domain wordlist:
| Tool | Accuracy | Number of Responses |
| :------ | :------- | :------------------ |
| pugdns | 100% | 19969 |
| massdns | 99.994% | 19968 |
| zdns | 100% | 19969 |
| dnsx | 99.984% | 19966 |
## Features & Roadmap
- [x] High-speed DNS query transmission via AF_XDP raw sockets
- [x] Asynchronous architecture using dedicated goroutines for packet sending, response handling, and state management.
- [x] Multi-threaded processing of DNS responses via eBPF and a configurable worker pool (goroutines).
- [x] Support for multiple nameservers and large domain lists via input files
- [x] Efficient DNS response capturing using eBPF
- [x] Automatic query retries for unanswered domains
- [x] Kernel and user-space drop monitoring for observability
- [x] Configurable number of workers, retries, and poll timeouts
- [x] Interactive UI (default) or simple text output for progress
- [x] Results saved in JSON format
- [ ] Support for different DNS record types (AAAA, MX, etc.)
- [ ] IPv6 support
- [ ] Dynamic rate limiting options
## Command-Line Flags
```
Usage of pugdns:
-domain string
Single domain to query (when not using -domains file) (default "google.com")
-domains string
File containing domains to query (one per line)
-dstmac string
Destination MAC address (optional, uses ARP resolution if empty)
-interface string
Network interface to attach to
-maxbatch int
Maximum number of packets to send at once. Default is 128. I suggest not changing this. (default 128)
-nameservers string
File containing nameservers to use (one per line)
-output string
File to save results to (default "results.json")
-poll int
Poll timeout in milliseconds (default 1)
-queue int
The queue on the network interface to attach to
-retries int
Number of retries for each domain (default 3)
-srcip string
Source IP address (optional, uses interface IP if empty)
-srcmac string
Source MAC address (optional, uses interface MAC if empty)
-verbose
Enable verbose output
-workers int
Number of workers to use (default 1)
```
**Example Usage:**
```bash
# Query domains from domains.txt using nameservers from resolvers.txt on interface eth0
sudo ./pugdns -interface eth0 -domains domains.txt -nameservers resolvers.txt -output my_results.json
```
*(Note: Running with `sudo` or appropriate capabilities (`CAP_NET_ADMIN`, `CAP_NET_RAW`, potentially `CAP_SYS_ADMIN` for memlock/BPF) is typically required for AF_XDP and eBPF operations.)*
## Installing
If you don’t want to build pugdns from source and just want to test it out, simply download the pre-compiled binary from our [Releases page](https://github.com/c3l3si4n/pugdns/releases/). It will be easier and faster.
## Building from source
If you really want to build from source, here's a rough guide on how to do so:
1. **Clone the repository:**
```bash
git clone https://github.com/c3l3si4n/pugdns
cd pugdns
```
2. **Install Dependencies:** Ensure you have Go (>= 1.18 recommended) and Clang/LLVM (for eBPF compilation) installed. You may also need kernel headers (`linux-headers-$(uname -r)` on Debian/Ubuntu).
```
sudo apt install linux-headers-$(uname -r) llvm libbpf-dev clang; sudo ln -s /usr/include/x86_64-linux-gnu/asm /usr/include/asm;
```
3. **Generate eBPF code and Build:**
```bash
go generate && go build
```
This command first compiles the eBPF C code (`pugdns.c`) into an object file using `clang`, then embeds it into a Go file (`pugdns_bpf*.go`) using `bpf2go`, and finally builds the main Go application (`pugdns`).
4. **Run:**
```bash
sudo ./pugdns [flags...]
```
## Credits
- [cilium/ebpf](https://github.com/cilium/ebpf) - Core eBPF library for Go used for loading and interacting with BPF programs and maps.
- [slavc/xdp](https://github.com/slavc/xdp) - AF_XDP library for Go
- Libraries used for UI: `charmbracelet/bubbletea`, `charmbracelet/lipgloss`, `charmbracelet/bubbles`.
---
Feel free to open issues for bugs, feature requests, or questions! Contributions are welcome.
|
https://github.com/badhive/stitch
|
stitch
Rewrite and obfuscate code in compiled binaries
Languages: C++ (98.0%), CMake (2.0%)
.idea
.idea
assets
assets
deps
deps
examples
examples
include/stitch
include/stitch
...
.clang-format
.clang-format
.clang-tidy
.clang-tidy
.gitignore
.gitignore
CMakeLists.txt
CMakeLists.txt
LICENSE.txt
LICENSE.txt
> README.md
## Stitch
Cross-platform C++ library for patching and obfuscating code in compiled binaries.
Code and binary parsing are logically separated in order to support as many
binary format + architecture combinations as possible.
Stitch currently works with the following binary formats:
- PE
### Supported architectures
#### x86
Stitch's x86 capability is heavily built on top of [zasm](https://github.com/zyantific/zasm).
Each `X86Function` provides a `zasm::Assembler` instance that is pre-populated
with the original instructions as they were on disk. The original instructions can
be accessed with `X86Function::GetOriginalCode`, and their positions within the assembler
can be accessed with `X86Inst::GetPos`.
### Examples + Use Cases
#### Binary manipulation
Stitch allows for functions to be safely edited post-compilation. Operands for
position-relative instructions, such as `jmp`s, are replaced with labels so that
the code can be easily modified and serialised without worry.
#### Obfuscation
The main and intended use for Stitch is code obfuscation on a binary level. It handles
the tedious task of injecting new data into files, so that operators can focus on more
complex obfuscation techniques, including but not limited to VM-based obfuscation.
Here's an example program that applies basic obfuscation (in the form of opaque predicates)
to a function (specified by its absolute address).
```c++
#include "stitch/binary/pe.h"
#include "stitch/target/x86.h"
const std::vector regs = {
zasm::x86::rdi,
zasm::x86::rsi,
zasm::x86::rcx,
zasm::x86::rdx,
zasm::x86::r8,
zasm::x86::r9,
zasm::x86::r10,
};
#include "stitch/binary/pe.h"
#include "stitch/target/x86.h"
const std::vector regs = {
zasm::x86::rdi,
zasm::x86::rsi,
zasm::x86::rcx,
zasm::x86::rdx,
zasm::x86::r8,
zasm::x86::r9,
zasm::x86::r10,
};
auto& getRandomReg() {
    auto& reg = regs[rand() % regs.size()];
    return reg;
}

int main() {
    srand(time(nullptr));

    stitch::PE pe("target/pe_branching.bin");
    auto* code = dynamic_cast<stitch::X86Code*>(pe.OpenCode());

    constexpr stitch::RVA fn_main = 0x00000001400015A1;
    auto* fn = dynamic_cast<stitch::X86Function*>(code->EditFunction(
        fn_main, ""));

    fn->Instrument([&fn](zasm::x86::Assembler& as) {
        for (stitch::X86Inst& inst : fn->GetOriginalCode()) {
            const bool to_insert = rand() % 2;
            const zasm::InstructionDetail& detail = inst.RawInst();
            if (detail.getMnemonic() != zasm::x86::Mnemonic::Ret && to_insert) {
                zasm::Label last_label = as.createLabel();
                const auto& reg = getRandomReg();
                as.setCursor(inst.GetPos());
                // opaque predicate: js + jns to the same label always falls
                // through, while preserving the flags and the clobbered register
                as.pushf();
                as.push(reg);
                as.xor_(reg, zasm::Imm(rand()));
                as.js(last_label);
                as.jns(last_label);
                as.bind(last_label);
                as.pop(reg);
                as.popf();
            }
        }
    });

    pe.SaveAs("target/pe_opaque_predicates.bin");
    pe.Close();
}
```
Here's the function before the obfuscation is applied:

...and after:

#### What happened?
1. When the function address is supplied to `Code::EditFunction`, Stitch
begins parsing the code at that address, while also doing basic control
flow tracking by splitting it up into basic blocks
2. The code is parsed into a `zasm::x86::Assembler` instance and intra-function
references are replaced with labels
3. `X86Function::Instrument` allows us to modify the code, making use of
an assembler that has been populated with the original code
4. `X86Function::Finish` updates PE relocation info, assembles and writes the
code to a new section with a 16-byte alignment. The memory ranges previously
occupied by the function are patched out and a jmp to the new code is inserted
in its place.
### Todo
- Tail call detection by saving all call sites traced from entrypoint(s)
- Tail call detection may prove unreliable: some functions in heavily optimised
binaries may be recognised as having a tail call when this is not the case. This
would cause stitch to not relocate and patch out the part of the function that
follows the unconditional branch.
- reliability improvement: check tail calls by cross-referencing destinations with
all reachable call sites in the binary and their destination addresses.
- potential workaround: Allow developers to specify whether to follow all jumps
and include them as part of the function.
|
https://github.com/infinigence/Semi-PD
|
Semi-PD
A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.
Languages: Python (91.6%), Cuda (3.8%), C++ (2.2%), Rust (1.8%), Shell (0.4%), Makefile (0.1%), Dockerfile (0.1%)
.devcontainer
.devcontainer
.github
.github
3rdparty/amd
3rdparty/amd
assets
assets
benchmark
benchmark
...
.clang-format-ignore
.clang-format-ignore
.editorconfig
.editorconfig
.gitignore
.gitignore
.gitmodules
.gitmodules
.isort.cfg
.isort.cfg
> README.md
# Semi-PD
A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation.
## Paper
If you use Semi-PD for your research, please cite our [paper](https://arxiv.org/pdf/2504.19867):
```
@misc{hong2025semipd,
title={semi-PD: Towards Efficient LLM Serving via Phase-Wise Disaggregated Computation and Unified Storage},
author={Ke Hong, Lufang Chen, Zhong Wang, Xiuhong Li, Qiuli Mao, Jianping Ma, Chao Xiong, Guanyu Wu, Buhe Han, Guohao Dai, Yun Liang, Yu Wang},
year={2025},
eprint={2504.19867},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgment
This repository originally started as a fork of the SGLang project. Semi-PD is a research prototype and does not have complete feature parity with open-source SGLang. We have only retained the most critical features and adapted the codebase for faster research iterations.
## Build && Install
```shell
# setup the semi-pd conda environment
conda create -n semi_pd -y python=3.11
conda activate semi_pd
# Use the last release branch
git clone git@github.com:infinigence/Semi-PD.git
cd Semi-PD
pip install --upgrade pip
# build IPC dependency
cd ./semi-pd-ipc/
pip install -e .
```
### For NVIDIA GPUs
```shell
# build Semi-PD
cd ..
pip install -e "python[all]" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
```
### For AMD GPUs
```shell
cd ../sgl-kernel
python setup_rocm.py install
cd ..
pip install -e "python[all_hip]"
```
## Using docker to build base environment
You can follow the following steps to build the base environment, or build from [Dockerfile](https://github.com/infinigence/Semi-PD/tree/update_readme/docker).
### Pull the NVIDIA image
```shell
docker pull lmsysorg/sglang:v0.4.4.post1-cu124
docker run -it --gpus all -p 30000:30000 -v /your/path:/your/path --ipc=host --name semi_pd lmsysorg/sglang:v0.4.4.post1-cu124
docker exec -it semi_pd bash
```
### Pull the AMD image
```shell
docker pull lmsysorg/sglang:v0.4.4.post1-rocm630
docker run -it --device=/dev/kfd --device=/dev/dri --shm-size=32g -p 30000:30000 -v /your/path:/your/path --ipc=host --name semi_pd lmsysorg/sglang:v0.4.4.post1-rocm630
docker exec -it semi_pd bash
```
Then you can follow the `Build && Install` section to build Semi-PD.
## Launching
### Introduce
The implementation of compute isolation is based on Multi-Process Service (MPS). For NVIDIA GPUs, the MPS service must be manually enabled, whereas on AMD GPUs, it is enabled by default.
### Enable MPS (NVIDIA)
```shell
export CUDA_MPS_ENABLE_PER_CTX_DEVICE_MULTIPROCESSOR_PARTITIONING=1
nvidia-cuda-mps-control -d
```
You can disable MPS service by using this cmd:
```shell
echo quit | sudo nvidia-cuda-mps-control
```
### Run online serving
Semi-PD can be enabled using the `--enable-semi-pd` flag. Additionally, our implementation does not share activations between the prefill and decode phases, which may result in slightly higher memory usage compared to the original SGLang. If an out-of-memory issue occurs, consider reducing the value of `--mem-fraction-static` to mitigate memory pressure.
```shell
python3 -m sglang.launch_server \
--model-path $MODEL_PATH --served-model-name $MODEL_NAME \
--host 0.0.0.0 --port $SERVE_PORT --trust-remote-code --disable-radix-cache \
--enable-semi-pd --mem-fraction-static 0.85 --tp $TP_SIZE
```
## Evaluation

### To reproduce the evaluation results
Please refer to the [evaluation](./evaluation/README.md) directory.
|
https://github.com/decrazyo/nes86
|
nes86
x86 emulation on the NES
Languages: Assembly (79.2%), Python (12.2%), SourcePawn (4.3%), C++ (1.2%), Lua (1.0%), BitBake (1.0%)
conf
conf
data
data
img
img
include
include
src
src
...
.gitignore
.gitignore
.gitmodules
.gitmodules
LICENSE
LICENSE
Makefile
Makefile
README.md
README.md
> README.md

# NES86
NES86 is an IBM PC emulator for the NES.
The goal of this project is to emulate an Intel 8086 processor and supporting PC hardware
well enough to run the
[Embeddable Linux Kernel Subset (ELKS)](https://github.com/ghaerr/elks),
including a shell and utilities.
It should be possible to run other x86 software
as long as it doesn't require more than a simple serial terminal.
[Watch the video!](https://www.youtube.com/watch?v=OooHTDMUSGY)
## How to run NES86

Download the NES ROM containing NES86 and ELKS from the [releases](https://github.com/decrazyo/nes86/releases) page.
See the [Binaries](#binaries) section for descriptions of the available binaries.
NES86 is not supported on all platforms.
The following platforms have been tested and Mesen2 or an Everdrive N8 Pro is recommended.
| platform | working? | issues |
|----------|----------|---------|
| [Mesen2](https://www.mesen.ca/) | ✅ | none |
| [BizHawk](https://tasvideos.org/BizHawk) | ✅ | overscan, quickerNES core incompatible, [dev build required](https://github.com/TASEmulators/BizHawk/actions/runs/14348701120) |
| [FCEUX](https://fceux.com/web/home.html) | ✅ | overscan, keyboard detection |
| [Rustico](https://rustico.reploid.cafe/) | ✅ | overscan, no keyboard |
| [ksNes (Animal Crossing)](https://rustico.reploid.cafe/) | ✅ | overscan, no keyboard, special ROM required, mapper hack required |
| [Nestopia](https://nestopia.sourceforge.net/) | ❌ | not enough cartridge RAM |
| [Mesen](https://www.mesen.ca/oldindex.php) | ❌ | not enough cartridge RAM |
| [Everdrive N8 Pro](https://krikzz.com/our-products/cartridges/everdrive-n8-pro-72pin.html) | ✅ | none |
| [Everdrive N8](https://krikzz.com/our-products/legacy/edn8-72pin.html) | ✅ | special ROM required, mapper hack required |
| [PowerPak](https://www.nesdev.org/wiki/PowerPak) | ❌ | many |
## Binaries
| file | description |
|------|-------------|
| nes86.nes | standard NES86 ROM |
| nes86.dbg | standard NES86 ROM debug symbols |
| nes86_ksnes.nes | NES86 ROM built for Animal Crossing |
| animal_crossing.gci | Animal Crossing save file with an NES item |
| mmc5_patch.gci | Animal Crossing MMC5 mapper patch |
| nes86.gci | NES86 ROM injected in a GameCube save file for use with Animal Crossing |
| nes86_edfc.nes | NES86 ROM built for the original Everdrive N8 |
| 005.RBF | Everdrive N8 MMC5 mapper patched to support NES86 |
## Controls
### Family BASIC Keyboard
NES86 supports the [Family BASIC Keyboard](https://www.nesdev.org/wiki/Family_BASIC_Keyboard) as an input device.
If a keyboard is detected at boot then it will be used automatically.
### On-screen Keyboard
If no keyboard is detected at boot then an on-screen keyboard will be made available.
Pressing **SELECT** will open/close the on-screen keyboard.
With the keyboard open, pressing **B** will type the selected key and **A** serves as a dedicated **Enter** key.
### Joypad Mapping
Joypad buttons are mapped directly to the following keys when the on-screen keyboard is closed.
This is done to make it easier to play `ttytetris` without a keyboard.
| joypad button | keyboard key | purpose |
|---------------|--------------|---------|
| A | k | rotate clockwise |
| B | j | rotate counterclockwise |
| START | p | pause |
| SELECT | | open keyboard |
| UP | Space | hard drop |
| DOWN | s | soft drop |
| LEFT | h | move left |
| RIGHT | l | move right |
## How to build NES86
1. Clone the project and its submodules.
`git clone --recurse-submodules https://github.com/decrazyo/nes86.git`
2. Install dependencies.
`apt install make cc65 gcc-ia16-elf`
### Build ELKS
The following steps will build an ELKS image that is compatible with NES86.
1. Enter the `elks` directory.
`cd nes86/data/elks/elks/`
2. Create a `cross` directory.
`mkdir cross`
3. Setup your environment.
`. ./env.sh`
4. Build the cross tool chain. This will take a while.
`tools/build.sh`
5. Copy or rename the provided configuration file.
`cp nes86.config .config`
6. Build ELKS.
`make all`
### Build NES86
By default, the NES86 build process will use the ELKS image that was built in the previous step.
If you would like to run some other x86 software then you'll probably need to modify
`data/Makefile`, `src/x86/rom.s`, and `conf/ld.cfg`
1. Return to the top level NES86 directory.
`cd ../../../`
2. Build NES86.
`make all`
The resulting NES ROM can be found at `nes86/bin/nes86.nes`.
## Contributing to NES86
Contributions and ports are welcome.
See
[STYLE.md](https://github.com/decrazyo/nes86/blob/main/STYLE.md)
for the project's coding style guidelines.
|
https://github.com/jkramarz/TheTick
|
TheTick
The Tick is the next evolution in covert access control system implants for simulating adversary-in-the-middle attacks.
Languages: C++ (46.8%), Jinja (18.5%), JavaScript (17.9%), C (15.7%), Python (1.1%)
.vscode
.vscode
boards
boards
data
data
docs/img
docs/img
hardware
hardware
...
.gitignore
.gitignore
.gitmodules
.gitmodules
LICENSE.fonts.md
LICENSE.fonts.md
LICENSE.hardware.txt
LICENSE.hardware.txt
LICENSE.md
LICENSE.md
> README.md
# The Tick

**The Tick** is the next evolution in covert access control system implants. Designed for seamless integration behind card readers, The Tick silently intercepts, logs, and replays access credentials with greater efficiency and stealth than ever before. Compatible with a wide range of RFID systems, it provides invaluable (to red teamers) insights into facility (in)security while enabling advanced credential injection. Whether for security auditing, red teaming, or mobile access control testing, The Tick delivers a compact, powerful, and flexible solution in an ever-connected world.
## Comparison to other projects
| | BLEKey | ESP-RFID-Tool | ESPKey | The Tick |
| -- | -- | -- | -- | -- |
| supported protocols | Wiegand | Wiegand | Wiegand | Wiegand **+ Magstripe Clock&Data + a bit of OSDP** |
| wireless interfaces | BLE | WiFi | WiFi | **BLE + WiFi** |
| configurable D0/D1 lines | ❌ | ❌ | ❌ | ✅ |
| max power supply voltage | battery powered | 🔥 | 18V DC | **25V** DC |
| max data line voltage | 5V | 5V | 5V | **16V** |
| SoC | nRF51822 | ESP8266 | ESP8266 | **ESP32C3** |
| firmware | properly structured code | time-efficient code soup | time-efficient code soup | **slightly-organized** code soup |
| arachnophobia-safe | ✅ | ✅ | ✅ | ❓ (partially, hidden mugga-mode) |
## Hardware revisions
The device is in ongoing development - its design, made using [KiCad EDA](https://www.kicad.org/), is getting gradually optimized and adjusted to incoming feedback and challenges experienced by both the maintainers and users.
Due to differences in pin mapping, the correct version must be declared in `platformio.ini`.
There are currently 2 major hardware revisions "in the wild":
### Revision 0.2

This is the current revision of the device. The rectangular purple boards with an RS485 transceiver in SOIC-8 are easy to assemble by hand without a solder paste stencil or hot air, using common parts. The connector footprint has been adapted for the larger KYOCERA AVX 9176-000, adequate for common wire gauges. It features an additional circuit for automatically switching power sources, making the device operation more foolproof.

This batch of PCBs was generously provided by [PCBWay](https://www.pcbway.com/). Thank you, Liam, for reaching out with the sponsorship, the kind words about the project, and for providing a nicer tone of soldermask! From uploading the design to having ready-made panelized PCBs delivered into my hands 8000 km away took about 5 days, with a weekend included.
You can [contact me](https://www.linkedin.com/in/jkramarz/) to receive one free of charge, or order them directly through [PCBWay Community](https://www.pcbway.com/project/shareproject/The_Tick_rev_0_2_52c0aa59.html) sharing platform.
[release files](https://github.com/jkramarz/TheTick/releases/tag/hardware-v0.2A)
### Revision 0.1

The initial and currently most common hardware release: square purple boards with an RS485 transceiver in QFN16 (hard for hand soldering). It does not yet feature a dedicated I2C connector and may have KYOCERA AVX 9175-000 series connectors installed, which are too small for regular PACS wiring. I'd advise a small feature backport - populating the JP2 solder jumper on the bottom side of the PCB with a Schottky diode will allow powering the reader from USB.
This is the revision [@jzdunczyk](https://www.linkedin.com/in/jzdunczyk/) used in her [Behind Closed Doors - Bypassing RFID Readers](https://www.blackhat.com/asia-25/briefings/schedule/index.html#behind-closed-doors---bypassing-rfid-readers-44006) talk at Black Hat Asia 2025.
[release files](https://github.com/jkramarz/TheTick/releases/tag/hardware-v0.1)
### Revision 0.0

An ESP32C3, a random level converter, an RS485 transceiver and a bunch of wires is a good start and a fully sufficient platform for testing the software features.
## Software
The firmware of this device started as a simple port of [ESPKey](https://github.com/octosavvi/ESPKey) to the ESP32C3, which gradually grew into an extensible, multi-protocol software project with its own improved hardware.
### Features
Currently, the firmware can be built with the following features:
#### Communication interfaces
| build flag | description |
|--------------------|-------------------------------------------------------------------------------|
| USE_BLE | BLEKey-style Bluetooth Low Energy support |
| USE_WIFI | ESPKey-style WiFi (hotspot or client) support |
| USE_HTTP | HTTP user interface |
#### Firmware upgrade
| build flag | description |
|--------------------|-------------------------------------------------------------------------------|
| ~~USE_OTA~~ | ~~Arduino-style over-the-air upgrade~~ |
| USE_OTA_HTTP | HTTP endpoint for upgrading firmware |
There's a USB connector on-board, which even features an embedded JTAG interface, but why not...
#### External reporting
| build flag | description |
|--------------------|---------------------------------------------------------------------------------|
| USE_MDNS_RESPONDER | broadcasts MDNS, so hostname instead of IP address can be used for a connection |
| USE_SYSLOG | reports interactions to syslog server |
| USE_LCD | reports interactions to a handy I2C OLED display |
#### Wire protocols
| build flag | description |
|-------------------------------|---------------------------------------------------------------------|
| USE_WIEGAND | provides support for Wiegand interface sniffing and transmitting |
| USE_CLOCKANDDATA | provides support for clock&data interface sniffing and transmitting |
| USE_OSDP + USE_OSDP_PD | provides support for OSDP Peripheral Device mode |
| ~~USE_OSDP + USE_OSDP_CP~~ | ~~provides support for OSDP Control Panel mode~~ |
###### In Wiegand mode,
the device can receive (sniff) and transmit messages of any length.
Assignment of D0 and D1 lines can be corrected in the configuration file after the device installation, if needed.
The device was successfully tested with 5V and 12V PACS systems that use different card number lengths.
###### In Clock&Data mode,
the device can receive and transmit messages of any reasonable length.
Assignment of DATA and CLOCK lines can be corrected in configuration file after the device installation, if needed.
The device was successfully tested with a 12V Clock&Data system, running in Magstripe and UNITEK-emulation modes.
Support for Paxton decoding is based on samples provided by [en4rab](https://github.com/en4rab/sigrok-paxton-pd).
###### In OSDP Peripheral Device mode,
the device enumerates and serves as a simple OSDP PD.
Card numbers can be transmitted using HTTP and BLE interfaces for testing purposes.
### Build instructions
Open the project in [PlatformIO](https://platformio.org/) and press "Upload", then "Upload Filesystem Image". The code is Arduino-flavoured, but took too long to compile using Arduino IDE.
### HTTP interface
If built with the *USE_HTTP* flag, the device provides a simple, rather eye-candy HTTP interface based on an almost-REST API, built using jQuery and Bootstrap.

Currently, it offers the following features:
* review and modification of application configuration files,
* review of sniffed reader interactions,
* replay of acquired credentials,
* sending arbitrary card numbers (raw or encoded in common formats)
### BLE interface
If built with the *USE_BLE* flag, the device exposes a custom Bluetooth Low Energy interface:

Currently, it offers the following features:
* reading the last sniffed card,
* notifying about new interactions,
* sending arbitrary card number.
Currently, by default, the device requires bonding with a pre-configured passkey and the use of secure connections.
Feature-wise it is similar to [BLEKey](https://github.com/linklayer/BLEKey) by Mark Baseggio and Eric Evenchick, but running on decade-younger hardware.
By default, functions are exposed in service *f498124f-2137-4615-9859-30eb4cecffb5* as characteristic *beb5483e-36e1-4688-b7f5-ea07361baaaa*. These UUIDs can be modified in the device configuration.
A Flipper Zero client is planned; it will be publicly released shortly after the BLE Central role is incorporated into its firmware (probably never).
### OTA upgrade
By properly configuring the build flags, the firmware can support OTA upgrades. BLE may need to be sacrificed to fit two copies of the firmware in the device's flash.
It is possible to use Arduino-style OTA (but I never did) or to upload firmware images over the HTTP endpoint, depending on the build configuration.
## Configuration reset
1. You need to get the timing Just Right™,
2. Start watching the [new emergency number spot from IT Crowd](https://youtu.be/HWc3WY3fuZU),
3. When the ambulance appears on the screen, connect The Tick to a power source (e.g. USB) or press the RST button,
4. When each of the digits "881 999" appears on the screen, briefly press the "BOOT" button,
5. Wait a few seconds - the device will start with empty configuration and log files and expose a WiFi hotspot.
## Hardware
### Hand soldering procedure
When hand-soldering without the paste stencil and a hot-plate, to limit accidental damage:
* start with assembling DC-DC converter,
* verify if it works correctly,
* close the appropriate solder bridges,
* populate Wiegand level converter,
* populate ESP32-C3 module,
* program the device,
* check if correct voltage levels are visible in HTTP interface,
* proceed with populating RS485 transceiver,
* verify that Wiegand still works,
* finish with the connectors.
It also fits the cheapest G3061 hot-plate you can get and solders nicely with hand-applied solder paste.
### ESP32-C3
The device utilizes ESP32-C3FH4 on a ready-made TENSTAR ESP32-C3 SuperMini Plus module.
Some of the non-Plus modules come with an incorrect antenna design, resulting in [impressively poor WiFi range](https://www.reddit.com/r/esp32/comments/1dsh3b5/warning_some_c3_super_mini_boards_have_a_design/).
For a better range, implementing an [antenna mod](https://peterneufeld.wordpress.com/2025/03/04/esp32-c3-supermini-antenna-modification/) may be a good option.

### DC-DC converter

As no linear voltage regulator is used, both power consumption and heat dissipation are minimal.
The device is also protected against reverse polarity - it just doesn't start, but doesn't blow up.
The power supply can use two pin-compatible PMICs:
Schematics of revisions 0.1 and 0.2A:
* Uses [LMR51430](https://www.ti.com/product/LMR51430) synchronous buck step-down converter, rated at up to 36V.
* It runs at a 1.1 MHz switching frequency and uses a relatively small 2.2uH inductor.
* It is probably a bit lower-noise and more power-efficient.
Schematics for revision 0.2B:
* Uses [TPS54202](https://www.ti.com/product/TPS54202) synchronous buck step-down converter, rated at up to 28V.
* It runs at a 500 kHz switching frequency and uses a common 10uH inductor.
* It is definitely more cost-effective.
Both designs were verified to work just fine, so I'd recommend assembling with whatever is available.
The maximum voltage is further limited by the voltage rating of the installed capacitors and polyfuse, but the components from the BOM are sufficient to safely run it with 24V-powered long-range readers.
### Battery operation
The device's DC-DC converter is configured to turn off at approximately 6V and start at 6.4V, providing overdischarge protection when the device is operated from a 2S lithium-ion polymer battery pack. The additional load (e.g. the connected reader) is not covered by this protection.
Battery voltage can be measured using device ADC and read in device web interface.
The 3-pin connector present on the board follows the same pinout as the balancer plugs present on battery packs.
### Level shifter

The absolute maximum drain-source voltage rating of the 2N7002 N-channel MOSFET is 60V.
The board is definitely not designed for such voltage levels and I advise against connecting it to live installations running at them, but 60V is much more than the 5-12V usually found in access control systems anyway.
The voltage shifter is derived from the "Bi-directional level shifter for I²C-bus and other systems" design described in Philips Semiconductors application note [AN97055](https://cdn-shop.adafruit.com/datasheets/an97055.pdf).
This solution:
* works properly in installations with external pull-up (e.g. provided by the reader sharing the same line) regardless of the voltage levels,
* provides a convenient way of pulling it down to GND,
* does not provide a way of pulling the line up to VCC.
### RS-485 transceiver

The device design incorporates [THVD1410](https://www.ti.com/product/THVD1410) / [THVD2410](https://www.ti.com/product/THVD2410) transceiver intended for interacting with OSDP systems.
Populating the transceiver on the PCB limits the maximum safe communication line voltage to 18V / 70V respectively.
It is configured in half-duplex, low-speed mode. A bus terminator can be populated on the PCB, but it is usually not required for proper operation.
In non-OSDP modes, the device firmware puts the transceiver into high-impedance mode to avoid interference.
### LCD support
The device supports connecting an SSD1306-based 128x32 OLED display to visualize reader interactions.
The Two Wire Interface (I²C) bus is available on a dedicated LCD connector.
### Solder bridges
The PCB revision 0.2A features 4 solder bridges for configuring power routing:
* when operating a board with fully populated DC-DC converter, solder bridges JP2 and JP7 should be closed,
* when using a simplified version, JP3 must be closed to connect grounds and JP1 may be closed to provide 5V to the connected reader.
### Connectors

The current PCB revision uses KYOCERA AVX insulation displacement connectors of the 9176-000 series that support wires up to 20 AWG. This configuration is intended for field use.
If IDC connectors are not needed, or there are special requirements for adapting the device to thicker wires, the footprint incorporates holes for connecting a wire:
* For firmly connecting devices on a desk, I personally use a short 22 AWG silicone cable with [WAGO 221 series splicing connectors](https://www.wago.com/gb/installation-terminal-blocks-and-connectors/splicing-connector-with-levers/p/221-613).
* For easily connecting to non-standard cables, I'd recommend using "automotive" IDC T2 [tap connectors](https://aliexpress.com/i/1005005768342063.html).
## Contributing
If you want to contribute to the project and make it better, your help is very welcome. Contributing is also a great way to learn more about social coding on GitHub, new technologies and their ecosystems, and how to make constructive, helpful bug reports, feature requests and the noblest of all contributions: a good, clean pull request.
I recognize that contributing to hardware projects can be more challenging than software, especially without access to the necessary components. If you're interested in helping out but lack the hardware, drop me an email — I may be able to send you a PCB to get started.
## License
### Software License
The software for "The Tick" is licensed under the GNU General Public License (GPL) v3.0. This license allows you to freely use, modify, and distribute the software, provided that any distribution or derivative works are also licensed under the GPL.
For more details on the GNU GPL v3.0, please refer to the [GNU General Public License v3.0](LICENSE.md).
### UI Libraries And Template License
The user interface of "The Tick" utilizes jQuery and Bootstrap, both of which are licensed under the MIT License. This permissive license allows you to freely use, modify, and distribute the code, with minimal restrictions.
For more details on the MIT License, please refer to the [MIT License](LICENSE.template.md).
### Hardware License
The hardware design for "The Tick" is licensed under the CERN Open Hardware Licence Version 2 - Strongly Reciprocal (CERN-OHL-S v2). This license permits the use, distribution, and modification of the hardware design, with the condition that any derived works must also be licensed under the same terms.
For more details on the CERN-OHL-S v2, please refer to the [CERN Open Hardware Licence Version 2 - Strongly Reciprocal](LICENSE.hardware.txt).
|
https://github.com/Wack0/entii-for-workcubes
|
entii-for-workcubes
PowerPC Windows NT ported to Nintendo GameCube/Wii/Wii U
Languages: C (94.0%), C++ (5.3%), Assembly (0.3%), Makefile (0.3%), BitBake (0.1%), Linker Script (0.0%)
arcfw
arcfw
arcldr
arcldr
cafegx2drv
cafegx2drv
fpexiblkdrv
fpexiblkdrv
fpgx35dll
fpgx35dll
...
COPYING
COPYING
README.md
README.md
pe_rules
pe_rules
> README.md
[](https://discord.gg/S8jWsc4SAq)
# Windows NT for GameCube/Wii
The following systems are supported:
* Nintendo GameCube
* Nintendo Wii
* Wii Mini requires SD card hardmod (for now)
* Nintendo Wii U (**vWii only for now**)
The following systems are theoretically supported, although not tested due to the rarity of such hardware:
* Broadway Evaluation Board
* Cortado boards
The following systems will NEVER be supported:
* early Dolphin Development Hardware with only 4MB of usable RAM
## Drivers present
* Flipper interrupt controller (in HAL)
* Flipper Video Interface console framebuffer (YUV XFB) for ARC firmware and HAL
* Flipper GPU RGB framebuffer under NT (writing to EFB under text setup, writing to texture under GDI; copying out to XFB on vblank interrupt)
* Flipper Serial Interface (gamecube controller ports), supporting the following devices:
* GameCube ASCII keyboard controller, plus unreleased English/European variants (discovered through reversing Phantasy Star Online); the latter are completely untested, the former has not been tested on real hardware
* GameCube controller, with the following mappings:
* Under ARC firmware: left analog stick and d-pad maps to up/down, A button maps to enter, B button maps to escape, X button maps to letter 'S'
* Under NT text setup: left analog stick and d-pad maps to up/down, c-stick maps to page up/page down, A button maps to enter, B button maps to escape, X button maps to F8, Y button maps to letter 'C', Z button maps to letter 'L'
* Under NT GDI: left analog stick moves mouse, A button maps to left mouse button, B button maps to right mouse button, L+R together maps to ctrl+alt+del, c-stick allows for choosing a keyboard scancode (1-9, 0, a-z), X button confirms the selected scancode. Numbers are first in the list so numeric-only text boxes (like entering CD key) still works.
* N64 Randnet keyboard, completely untested so may have issues
* N64 mouse (under NT only), completely untested so may have issues
* N64 controller (completely untested so may have issues), with the following mappings:
* Under ARC firmware: left analog stick and d-pad maps to up/down, A button maps to enter, B button maps to escape, Z button maps to letter 'S'
* Under NT text setup: left analog stick and d-pad maps to up/down, c-stick maps to page up/page down, A button maps to enter, B button maps to escape, Z button maps to F8, L trigger maps to letter 'C', R trigger maps to letter 'L'
* Under NT GDI: left analog stick moves mouse, A button maps to left mouse button, B button maps to right mouse button, L+R together maps to ctrl+alt+del, c-down and c-up allows for choosing a keyboard scancode (1-9, 0, a-z), start button confirms the selected scancode. Numbers are first in the list so numeric-only text boxes (like entering CD key) still works.
* Flipper External Interface (SPI bus), supporting the following devices:
* RTC
* USB Gecko (for kernel debugger only)
* SD Gecko or compatible
* IDE-EXI or compatible (has not been tested on real hardware)
* Vegas IOP IPC
* Vegas SDMC controller (via IOS)
* Vegas USB (OHCI/EHCI) controllers (via IOS), supporting the following devices:
* USB keyboard
* USB mouse
* USB mass storage (currently has some issues, some devices may not work)
* **Hotplugging USB devices is not supported. To use a USB device, it must be plugged in before launching the ARC firmware.**
## Software compatibility
NT 3.51 RTM and higher. NT 3.51 betas (build 944 and below) will need kernel patches to run due to processor detection bugs. NT 3.5 will never be compatible, as it only supports PowerPC 601.
(The additional suspend/hibernation features in NT 3.51 PMZ could be made compatible in theory, but in practice this would require all of the additional drivers for that to be reimplemented.)
## Installing
### Preliminary
* Grab binaries from the release page, extract to SD card (or EXI-IDE device)
* Copy an NT 3.51 or 4.0 ISO to `sd:\nt\disk00.iso`
* Create a raw disk image of the size you want at `sd:\nt\disk00.img` - I use `qemu-img create disk00.img 2G`, change the size as appropriate. Remember that the maximum file size on a FAT32 partition is 4GB.
* On a GameCube, load `arcldr_dol.dol` from Swiss; on Wii/vWii, load `arcldr` from the Homebrew Channel.
### Partitioning Disk
* When you get to ARC firmware menu, go to `Run firmware setup`, then `Repartition disk or disk image for NT installation`.
* Select the disk image you created earlier.
* Confirm the partition operation with Y (on keyboard), X button (on GameCube controller), or Z button (on N64 controller)
* When finished, the partitioner will ask to `Press any key to restart`. This should either restart your system or return to loader where you can load `arcldr` again.
### Installing NT
* Choose `Run NT setup from cd00`.
* You will receive the message `Setup could not determine the type of computer you have`.
* Choose `Other` (default selected option), just press `Enter` (or A button) when asked for hardware support disk.
* Choose the HAL from the list, currently there is only one option: `Nintendo GameCube, Wii and Wii U (vWii)`.
* Next you will receive the message `Setup could not determine the type of one or more mass storage drivers installed in your system`. At least two drivers need to be loaded at this point.
* To load a driver, press `S` (X button on GameCube controller, Z button on N64 controller) to pick a driver, choose `Other` from the list, press `Enter` (A button) when asked for hardware support disk, and choose the driver.
* `Nintendo Wii SD Slot (via IOS) [Disk Images]` is required when using the front SD card slot on a Wii or Wii U
* `Nintendo Wii USB (via IOS)` is required when using any USB device (keyboard, mouse or mass storage) on a Wii or Wii U
* `Nintendo GameCube Controller Ports` is required when using devices plugged into the GameCube controller ports on a GameCube or Wii
* `SD Gecko or IDE-EXI and Compatible [Disk Images]` is required when using SD Gecko (or compatible) or IDE-EXI (or compatible) devices in the GameCube memory card slots on a GameCube or Wii, or the serial ports present underneath a GameCube
* To make this simpler: on a GameCube you will need only the last two; on a Wii U vWii you will only need the first two, and on a Wii you will need the first two and possibly the last two depending on if you are using/want to use the GameCube controller ports/memory card slots or not.
* You will receive the message `Setup could not determine the type of video adapter installed in the system`. Choose `Other` from the list, press `Enter` when asked for hardware support disk, and choose the correct option depending on the OS you are installing.
* There are two options in this list; `ArtX Flipper, ATI Vegas, AMD Bollywood (NT 4)` is for NT 4, `ArtX Flipper, ATI Vegas, AMD Bollywood (NT 3.x)` is for NT 3.51.
* NT will boot and text setup will start. Go through the text setup.
* Under `Setup has determined that your computer contains the following hardware and software components`, change `Keyboard` from `Unknown` to `XT, AT or Enhanced Keyboard (83-104 keys)` and `Pointing Device` from `Unknown` to `No Mouse or Other Pointing Device`.
* Choose the `C:` drive from the partition list. If you chose to create an NT partition of size 2GB or less, it must be formatted.
* If you chose to create an NT partition of over 2GB in size, errors will be found by the disk examination process which will require a reboot. You will need to boot back into the ARC firmware from Swiss or the Homebrew Channel and follow the "Installing NT" steps again to get back to this point.
* On the second attempt, disk examination will succeed, so just choose the `C:` partition again in the NT text setup partition selector.
* Proceed through the rest of NT text and graphical setup as normal.
## Known issues
* System may hang on reboot sometimes.
* There are issues with some USB mass storage devices.
* GDI driver uses slow unoptimised code for copying from GDI bitmap buffer to GPU texture buffer.
* ARC firmware and NT drivers support exFAT for disk images on an SD card/EXI-IDE device, but the loader currently does not support exFAT for loading the ARC firmware proper.
* The loader currently does not support loading the ARC firmware from a USB mass storage device.
* Be aware that the EXI bus is slower compared to other disk interfaces, so using SD Gecko/EXI-IDE causes slowdowns. This is most notable when installing NT on GameCube where this is the only available option.
## Building ARC firmware
You need devkitPPC. Additionally, a `libgcc.a` compiled for `powerpcle` must be present in `arcfw/gccle`. If you need to find one, it should be present on any Void Linux mirror, the current filename to search for as of 2024-07-12 is `cross-powerpcle-linux-gnu-0.34_1.x86_64.xbps` - decompress it by `zstdcat cross-powerpcle-linux-gnu-0.34_1.x86_64.xbps -o cross-powerpcle-linux-gnu-0.34_1.x86_64.tar`, then pull the file out of the tarball: `usr/lib/gcc/powerpcle-linux-gnu/10.2/libgcc.a`.
* Ensure `DEVKITPPC` environment variable is set to your devkitPPC directory, usually `/opt/devkitpro/devkitPPC`
* Build the ARC firmware loader: `cd arcldr ; make -f Makefile.wii ; make -f Makefile.gc ; cd ..`
* Build the little endian libc: `cd arcfw/baselibc ; make ; cd ../..`
* Build the ARC firmware itself: `cd arcfw; make ; cd ..`
## Building HAL/drivers
You need [peppc](https://github.com/Wack0/peppc). Additionally, the powerpc libs from the [NT4 DDK](https://archive.org/details/94396011997WindowsNTDDKForWinNT4.0WorkstationUS.iso.7z) (`ddk/lib/ppc/free/*.lib`) must be present in `lib`. The rest of the toolchain (VC6 PPC CE cross compiler used for the C preprocessor for asm, as multi-line defines are handled improperly by gcc cpp; assembler PASM.EXE with a single branch patched to skip "dump statements"; resource compiler and linker from MSVC 4.2, and its dependencies; `SPLITSYM.EXE` from NT 3.51 DDK to split COFF debug symbols from executables) is present in the `msvc-ppc` directory.
To build the NT 3.5x GDI driver `fpgx35dll` you also need the powerpc `winsrv.lib` from NT 3.51 DDK.
The headers are included and come from various places with slight modifications for working with this toolchain, or for backwards compatibility reasons:
* `nt4/sdk` - NT4 SDK
* `nt4/ddk` - NT4 DDK (including all the headers from the `src/*/inc` directories)
* `nt4/crt` - VC++ 4.0 (CRT headers)
* `nt4/hal` - because of a lack of a public dump, this folder includes the headers that have evidence suggesting they were included in the NT4 halkit (minus `nthal.h` which is in the hal source folder, and was modified to allow for backwards compatibility). Some have been modified to allow them to be included by drivers after `ntddk.h` (so drivers can call `HalDisplayString` for debugging purposes, or use `LOADER_PARAMETER_BLOCK` to determine whether they are running in text setup or not).
The makefiles used are derived from devkitPro.
Ensure `PEPPC` environment variable is set to the `peppc-build/toolchain/bin` directory.
You must build the hal first (`cd halartx; make; cd ..`) before you can build the other drivers, as the HAL implements the exported IOP IPC and EXI drivers (due to the HAL itself using them).
## Acknowledgements
* libc used is [baselibc](https://github.com/PetteriAimonen/Baselibc)
* ELF loader, arcfw makefile (and some cache invalidation functions) adapted from [The Homebrew Channel](https://github.com/fail0verflow/hbc)
* Other makefiles adapted from [devkitPro](https://github.com/devkitPro/devkitppc-rules)
* Some lowlevel powerpc stuff, and ARC firmware framebuffer console implementation and font, adapted from [libogc](https://github.com/devkitPro/libogc)
* EXI-IDE driver in ARC loader adapted from [Swiss](https://github.com/emukidid/swiss-gc/blob/master/cube/swiss/source/devices/fat/ata.c)
* IOS IPC driver in ARC firmware adapted from [The Homebrew Channel's reload stub](https://github.com/fail0verflow/hbc/blob/master/channel/channelapp/stub/ios.c)
* ISO9660 FS implementation inside ARC firmware is [lib9660](https://github.com/erincandescent/lib9660) with some modifications.
* FAT FS implementation inside ARC firmware is [Petit FatFs](http://elm-chan.org/fsw/ff/00index_p.html) with some modifications; additionally the full [FatFs](http://elm-chan.org/fsw/ff/) is used for reading the underlying disk images on FAT16/FAT32/exFAT partitions (in ARC and inside iossdmc.sys and fpexiblk.sys)
* GDI driver derived from NT4 DDK example `framebuf`.
* Various drivers adapted from those in libogc.
|
https://github.com/deepseek-ai/DualPipe
|
DualPipe
A bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training.
Languages: Python (100.0%)
dualpipe
dualpipe
examples
examples
images
images
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
setup.py
setup.py
> README.md
# DualPipe
DualPipe is an innovative bidirectional pipeline parallelism algorithm introduced in the [DeepSeek-V3 Technical Report](https://arxiv.org/pdf/2412.19437). It achieves full overlap of forward and backward computation-communication phases, also reducing pipeline bubbles. For detailed information on computation-communication overlap, please refer to the [profile data](https://github.com/deepseek-ai/profile-data).
### Schedules

Example DualPipe scheduling for 8 PP ranks and 20 micro-batches in two directions.
The micro-batches in the reverse direction are symmetric to those in the forward direction, so
we omit their batch ID for illustration simplicity. Two cells enclosed by a shared black border
have mutually overlapped computation and communication
## DualPipeV
DualPipeV is a concise V-shape schedule derived from DualPipe using a "cut-in-half" procedure, introduced by Sea AI Lab as "Cut-in-half" in their [blog post](https://hackmd.io/@ufotalent/r1lVXsa9Jg). Thanks to them for this efficient schedule!
### Schedules

Example DualPipeV scheduling for 4 PP ranks (8 PP stages) and 10 micro-batches.
## Pipeline Bubbles and Memory Usage Comparison (based on the same number of PP stages)
| Method | Bubble | Parameter Per Device | Activation Per Device | #Devices |
|-------------|---------------------------------|----------------------|-----------------------|----------|
| 1F1B | (*PP*-1)(𝐹+𝐵) | 1× | *PP* | *PP* |
| ZB1P | (*PP*-1)(𝐹+𝐵-2𝑊) | 1× | *PP* | *PP* |
| DualPipe | (*PP*/2-1)(𝐹&𝐵+𝐵-3𝑊) | 2× | *PP*+1 | *PP* |
| DualPipeV | (*PP*/2-1)(𝐹&𝐵+𝐵-3𝑊) | 2× | *PP*+1 | *PP*/2 |
*PP* denotes the number of PP stages (even).
𝐹 denotes the execution time of a forward chunk, 𝐵 denotes the execution time of a
full backward chunk, 𝑊 denotes the execution time of a "backward for weights" chunk, and 𝐹&𝐵
denotes the execution time of two mutually overlapped forward and backward chunks.
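As a quick sanity check of the formulas above, the sketch below plugs illustrative (made-up) chunk times into each bubble expression; the numbers are not from the paper.

```python
# Illustrative arithmetic only: the PP, F, B, W and F&B values are made up.
PP, F, B, W = 8, 1.0, 2.0, 0.5
FB = 2.5  # time of two mutually overlapped forward and backward chunks

bubble_1f1b     = (PP - 1) * (F + B)               # 21.0
bubble_zb1p     = (PP - 1) * (F + B - 2 * W)       # 14.0
bubble_dualpipe = (PP / 2 - 1) * (FB + B - 3 * W)  # 9.0
print(bubble_1f1b, bubble_zb1p, bubble_dualpipe)
```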
## Quick Start
The usage is shown in the following example:
```bash
python examples/example_dualpipe.py
python examples/example_dualpipev.py
```
Note: For real-world applications, you will need to implement a custom `overlapped_forward_backward` method tailored to your specific module.
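For orientation only, here is a minimal, non-overlapping sketch of such a method; the signature and class-method placement are assumptions made for illustration, so consult `examples/example_dualpipe.py` for the interface DualPipe actually expects.

```python
# Assumed-signature sketch: a real implementation should interleave the forward
# chunk's compute with the backward chunk's compute/communication instead of
# running them back to back as done here.
import torch

class MyChunk(torch.nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.linear = torch.nn.Linear(dim, dim)

    def forward(self, x):
        return self.linear(x)

    @classmethod
    def overlapped_forward_backward(cls, fwd_module, fwd_inputs, bwd_outputs, bwd_output_grads):
        outputs = fwd_module(*fwd_inputs)                       # forward chunk
        torch.autograd.backward(bwd_outputs, bwd_output_grads)  # backward chunk
        return outputs
```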
## Requirements
- PyTorch 2.0 and above
## Developers
DualPipe was created and developed by Jiashi Li and Chengqi Deng and Wenfeng Liang.
## Citation
```bibtex
@misc{deepseekai2025deepseekv3technicalreport,
title={DeepSeek-V3 Technical Report},
author={DeepSeek-AI},
year={2025},
eprint={2412.19437},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.19437},
}
```
|
https://github.com/deepseek-ai/3FS
|
3FS
A high-performance distributed file system designed to address the challenges of AI training and inference workloads.
Languages: C++ (86.9%), Rust (4.4%), OpenEdge ABL (3.4%), Python (2.1%), C (1.7%), CMake (0.8%)
.cargo
.cargo
.github/workflows
.github/workflows
benchmarks
benchmarks
cmake
cmake
configs
configs
...
.clang-format
.clang-format
.clang-tidy
.clang-tidy
.clangd
.clangd
.dockerignore
.dockerignore
.gitignore
.gitignore
> README.md
# Fire-Flyer File System
[](https://github.com/deepseek-ai/3fs/actions/workflows/build.yml)
[](LICENSE)
The Fire-Flyer File System (3FS) is a high-performance distributed file system designed to address the challenges of AI training and inference workloads. It leverages modern SSDs and RDMA networks to provide a shared storage layer that simplifies development of distributed applications. Key features and benefits of 3FS include:
- Performance and Usability
- **Disaggregated Architecture** Combines the throughput of thousands of SSDs and the network bandwidth of hundreds of storage nodes, enabling applications to access storage resources in a locality-oblivious manner.
- **Strong Consistency** Implements Chain Replication with Apportioned Queries (CRAQ) for strong consistency, making application code simple and easy to reason about.
- **File Interfaces** Develops stateless metadata services backed by a transactional key-value store (e.g., FoundationDB). The file interface is well known and used everywhere. There is no need to learn a new storage API.
- Diverse Workloads
- **Data Preparation** Organizes outputs of data analytics pipelines into hierarchical directory structures and manages a large volume of intermediate outputs efficiently.
- **Dataloaders** Eliminates the need for prefetching or shuffling datasets by enabling random access to training samples across compute nodes.
- **Checkpointing** Supports high-throughput parallel checkpointing for large-scale training.
- **KVCache for Inference** Provides a cost-effective alternative to DRAM-based caching, offering high throughput and significantly larger capacity.
## Documentation
* [Design Notes](docs/design_notes.md)
* [Setup Guide](deploy/README.md)
* [USRBIO API Reference](src/lib/api/UsrbIo.md)
* [P Specifications](./specs/README.md)
## Performance
### 1. Peak throughput
The following figure demonstrates the throughput of read stress test on a large 3FS cluster. This cluster consists of 180 storage nodes, each equipped with 2×200Gbps InfiniBand NICs and sixteen 14TiB NVMe SSDs. Approximately 500+ client nodes were used for the read stress test, with each client node configured with 1x200Gbps InfiniBand NIC. The final aggregate read throughput reached approximately 6.6 TiB/s with background traffic from training jobs.

To benchmark 3FS, please use our [fio engine for USRBIO](benchmarks/fio_usrbio/README.md).
### 2. GraySort
We evaluated [smallpond](https://github.com/deepseek-ai/smallpond) using the GraySort benchmark, which measures sort performance on large-scale datasets. Our implementation adopts a two-phase approach: (1) partitioning data via shuffle using the prefix bits of keys, and (2) in-partition sorting. Both phases read/write data from/to 3FS.
The test cluster comprised 25 storage nodes (2 NUMA domains/node, 1 storage service/NUMA, 2×400Gbps NICs/node) and 50 compute nodes (2 NUMA domains, 192 physical cores, 2.2 TiB RAM, and 1×200 Gbps NIC/node). Sorting 110.5 TiB of data across 8,192 partitions completed in 30 minutes and 14 seconds, achieving an average throughput of *3.66 TiB/min*.


### 3. KVCache
KVCache is a technique used to optimize the LLM inference process. It avoids redundant computations by caching the key and value vectors of previous tokens in the decoder layers.
The top figure demonstrates the read throughput of all KVCache clients (1×400Gbps NIC/node), highlighting both peak and average values, with peak throughput reaching up to 40 GiB/s. The bottom figure presents the IOPS of removing ops from garbage collection (GC) during the same time period.


## Check out source code
Clone the 3FS repository from GitHub:
```bash
git clone https://github.com/deepseek-ai/3fs
```
When `deepseek-ai/3fs` has been cloned to a local file system, run the
following commands to check out the submodules:
```bash
cd 3fs
git submodule update --init --recursive
./patches/apply.sh
```
## Install dependencies
Install dependencies:
```bash
# for Ubuntu 20.04.
apt install cmake libuv1-dev liblz4-dev liblzma-dev libdouble-conversion-dev libdwarf-dev libunwind-dev \
libaio-dev libgflags-dev libgoogle-glog-dev libgtest-dev libgmock-dev clang-format-14 clang-14 clang-tidy-14 lld-14 \
libgoogle-perftools-dev google-perftools libssl-dev libclang-rt-14-dev gcc-10 g++-10 libboost1.71-all-dev build-essential
# for Ubuntu 22.04.
apt install cmake libuv1-dev liblz4-dev liblzma-dev libdouble-conversion-dev libdwarf-dev libunwind-dev \
libaio-dev libgflags-dev libgoogle-glog-dev libgtest-dev libgmock-dev clang-format-14 clang-14 clang-tidy-14 lld-14 \
libgoogle-perftools-dev google-perftools libssl-dev gcc-12 g++-12 libboost-all-dev build-essential
# for openEuler 2403sp1
yum install cmake libuv-devel lz4-devel xz-devel double-conversion-devel libdwarf-devel libunwind-devel \
libaio-devel gflags-devel glog-devel gtest-devel gmock-devel clang-tools-extra clang lld \
gperftools-devel gperftools openssl-devel gcc gcc-c++ boost-devel
# for OpenCloudOS 9 and TencentOS 4
dnf install epol-release wget git meson cmake perl lld gcc gcc-c++ autoconf lz4 lz4-devel xz xz-devel \
double-conversion-devel libdwarf-devel libunwind-devel libaio-devel gflags-devel glog-devel \
libuv-devel gmock-devel gperftools gperftools-devel openssl-devel boost-static boost-devel mono-devel \
libevent-devel libibverbs-devel numactl-devel python3-devel
```
Install other build prerequisites:
- [`libfuse`](https://github.com/libfuse/libfuse/releases/tag/fuse-3.16.1) 3.16.1 or newer version
- [FoundationDB](https://apple.github.io/foundationdb/getting-started-linux.html) 7.1 or newer version
- [Rust](https://www.rust-lang.org/tools/install) toolchain: minimal 1.75.0, recommended 1.85.0 or newer version (latest stable version)
## Build 3FS
- Build 3FS in `build` folder:
```
cmake -S . -B build -DCMAKE_CXX_COMPILER=clang++-14 -DCMAKE_C_COMPILER=clang-14 -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
cmake --build build -j 32
```
- Build 3FS use Docker
- For TencentOS-4: `docker pull docker.io/tencentos/tencentos4-deepseek3fs-build:latest`
- For OpenCloudOS-9: `docker pull docker.io/opencloudos/opencloudos9-deepseek3fs-build:latest`
## Run a test cluster
Follow instructions in [setup guide](deploy/README.md) to run a test cluster.
## Report Issues
Please visit https://github.com/deepseek-ai/3fs/issues to report issues.
|
https://github.com/seal-rg/recurrent-pretraining
|
recurrent-pretraining
Pretraining and inference code for a large-scale depth-recurrent language model
Languages: Python (100.0%)
evaluate_raven
evaluate_raven
examples
examples
launch_configs
launch_configs
recpre
recpre
scripts
scripts
...
.gitignore
.gitignore
LICENSE
LICENSE
README.md
README.md
finetuning_simple_example.py
finetuning_simple_example.py
launch_frontier.py
launch_frontier.py
> README.md
# Code for Pretraining and Inference of Huginn-0125 - a Depth-Recurrent Model
This repo contains the code we used to train a recurrent-depth model at scale on 4096 AMD GPUs on Frontier. All details on this model can be found in the tech report: "Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach" (https://www.arxiv.org/abs/2502.05171). The final model is `huginn-0125`, which can be found here: https://huggingface.co/tomg-group-umd/huginn-0125.
Over time, we have also accumulated a good amount of code used for testing and inference, especially the HF-compatible modeling file `recpre/raven_modeling_minimal.py` and the inference scripts in `evaluate_raven`. Benchmarking through `lm-eval` is supported; see more info below.
This repo was originally based on a fork of https://github.com/Lightning-AI/litgpt, which was very helpful to bootstrap our efforts, but little `litgpt` code remains at this stage. Code in this repository was written by Jonas Geiping, John Kirchenbauer, Sean McLeish, Khalid Saifullah, Manli Shu, Neel Jain, Siddarth Singh, Abhimanyu Hans, Monte Hoover and Prajwal Singhanaia.
This repo also contains all code we used to prepare the model's tokenizer and data, mostly in `scripts/`.
I (Jonas) do not necessarily think that you should pretrain your own model with this implementation, but I hope it serves as a useful reference for the exact choices we took to run this model (at all), and how we ran this model given the limitations of AMD systems. **If you are working with either of these, feel free to always raise an issue asking for more details.**
## Code Setup:
* The actual model definition is in `recpre/model_dynamic.py`.
* The training is orchestrated from `train.py`.
* Model shapes can be found in `recpre/model_registry.py`. The final model has the shape `nebel-raven-3.5b`.
* The configurations for our two large-scale runs are in `launch_configs/`.
* The environment flags can be read out of `launch_frontier.py`.
* The parallelism implementation is deep down in `recpre/utils.py`, in a class called `SimpleFabric`. `_allreduce_chunk_stream` was used for inter-node communication, which was the only solution to remedy RCCL hangs at scale when using the OFI plugin, at the time of writing.
The code to run the model at inference is probably easier to look at, if you just want to see the model architecture.
It can be found on all Huggingface repos of this model, and at `recpre/raven_modeling_minimal.py`.
## Reproducing Benchmark Scores
All benchmark scores reported in the paper are computed using the lm-eval harness, except for the code tasks, which are executed using bigcode. For default benchmarks, you can run `lm-eval` like so (no installation necessary):
```
lm_eval --model hf --model_args pretrained=tomg-group-umd/huginn-0125,trust_remote_code=True,dtype=bfloat16,mean_recurrence=32 --tasks hellaswag --batch_size=auto --num_fewshot=0
```
For GSM8k, "w/ sys. prompt" refers to the following invocation, using this system prompt, and chat formatting:
```
lm_eval --model hf \
--model_args pretrained=tomg-group-umd/huginn-0125,trust_remote_code=True,dtype=bfloat16,mean_recurrence=32 \
--tasks gsm8k_cot --batch_size=auto --apply_chat_template=True --fewshot_as_multiturn \
--system_instruction="You are a helpful assistant that can assist users with mathematical reasoning." \
```
To reproduce humaneval scores, you nowadays do not need to install bigcode-eval directly, but you can also use the lm-eval harness, like so
```
HF_ALLOW_CODE_EVAL=1 accelerate launch -m lm_eval \
--model hf --model_args pretrained=tomg-group-umd/huginn-0125,mean_recurrence=32,trust_remote_code=True,dtype=bfloat16 \
--tasks humaneval_instruct --batch_size=1 --num_fewshot=0 \
--output_path=outputs/heval --confirm_run_unsafe_code \
--apply_chat_template=True \
--gen_kwargs=do_sample=True,temperature=0.2,top_p=0.95
```
## Fast Inference
Fast inference through vllm is now supported. See the `vllm` folder for more details. The plugin for this model can be installed into any recent vllm v1 version.
## Data
We have uploaded the entire training dataset to Hugging Face, you can find it here: https://huggingface.co/datasets/tomg-group-umd/huginn-dataset.
This upload contains 4096 parquet files (for training and validation each), exactly corresponding to the 4096 AMD GPUs used to train the model data-parallel. Each row contains 4096+1 tokens, so one training step (we train with a local microbatch size of 1) corresponds to exactly one row from each file.
## The grim details
What steps would you have to take if you were to replicate this model training and data collection run on an AMD cluster? Follow this outline (and please ask for more details when you're stuck).
1. Use `scripts/tokenizer_generation.py` to generate the tokenizer. Before you run the script, adapt all paths for your system. Data download is automatic. You also need the BPE trainer from https://github.com/gautierdag/bpeasy.
2. Run `scripts/scalable_data_download.py` to download all raw datasets. The name of the script is a lie: this is not so scalable, it will take a long time and lots of space, and it will fail due to random errors. You'll also notice that there are a number of extra rules hardcoded in the script for various badly formatted datasets. By the time you run this script, some of these authors may have updated their dataset, breaking assumptions set here. You would get an error in that case and would need to investigate that particular dataset. After this step, you'd have all raw datasets in a `staging` folder.
3. Run the `scripts/parquet_to_parquet_tokenizer.py` to generate the tokenized dataset. Again, remember to set your paths correctly.
4. After tokenizing, run the `scripts/parquet_to_parquet_shuffler.py` to shuffle the data.
5. Define your own launch config in `launch_configs/` or use our config, and launch `train.py` onto your cluster. We launched onto frontier with the launcher file called `launch_frontier.py`, but this will not help you on a different cluster. Follow your cluster's best practices and environment flag guidelines when setting up a large-scale run. The core command is just `python train.py --config=launch_configs/your_config.yaml`.
6. Watch it train (hopefully). You can add additional segments to the training run as needed.
## License
This code is released under an [apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. Part of the code is licensed under the Lightning AI Apache-2.0 license.
## Citation
```
@article{geiping_scaling_2025,
title = {Scaling up {{Test-Time Compute}} with {{Latent Reasoning}}: {{A Recurrent Depth Approach}}},
shorttitle = {Scaling up {{Test-Time Compute}} with {{Latent Reasoning}}},
author = {Geiping, Jonas and McLeish, Sean and Jain, Neel and Kirchenbauer, John and Singh, Siddharth and Bartoldson, Brian R. and Kailkhura, Bhavya and Bhatele, Abhinav and Goldstein, Tom},
year = {2025},
month = feb,
eprint = {2502.05171},
primaryclass = {cs},
publisher = {arXiv},
doi = {10.48550/arXiv.2502.05171},
url = {http://arxiv.org/abs/2502.05171},
urldate = {2025-02-10},
archiveprefix = {arXiv},
keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning},
journal = {arxiv:2502.05171[cs]}
}
```
## Contact
Please, feel free to contact us with any questions, or open an issue!
|
https://github.com/T3nb3w/ComDotNetExploit
|
ComDotNetExploit
A C++ proof of concept demonstrating the exploitation of Windows Protected Process Light (PPL) by leveraging COM-to-.NET redirection and reflection techniques for code injection. This PoC showcases bypassing code integrity checks and loading malicious payloads in highly protected processes such as LSASS. Based on research from James Forshaw.
Languages: C++ (100.0%)
Bypass_COM_PPL
Bypass_COM_PPL
...
ComDotNetExploit.sln
ComDotNetExploit.sln
README.md
README.md
> README.md
# PPL Exploit PoC (Proof of Concept)
This repository contains a C++ Proof of Concept (PoC) demonstrating the exploitation of Windows Protected Process Light (PPL) using COM-to-.NET redirection and reflection techniques for code injection. The exploit bypasses code integrity checks and injects a malicious payload into highly protected processes such as LSASS.
The PoC leverages registry manipulation and the IDispatch interface, enabling code injection into a PPL process like `svchost.exe` or others with similar protections. This technique is inspired by James Forshaw's (a.k.a. [@tiraniddo](https://infosec.exchange/@tiraniddo)) research into exploiting .NET reflection and the bypass of signature checks on PPL processes.
I wrote a blog post about this tool:
- __Blog post part__: [Abusing IDispatch for Trapped COM Object Access & Injecting into PPL Processes](https://mohamed-fakroud.gitbook.io/red-teamings-dojo/abusing-idispatch-for-trapped-com-object-access-and-injecting-into-ppl-processes)
---
## **Usage**
**Run the Exploit:**
The following command will load the malicious payload into the svchost PPL process:
```bash
ComDotNetExploit.exe <DLL Path> <Static Class Name>
```
---
## **Analysis**
The PoC demonstrates how registry manipulation can allow COM-to-.NET redirection to execute unmanaged code in a .NET process. By enabling reflection via `Assembly.Load(byte[])`, we bypass the SEC_IMAGE code integrity checks that are typically enforced during image section creation in PPL processes.
The exploit showcases the following attack vectors:
- **COM-to-.NET redirection**: By manipulating registry keys, we can redirect COM activation to a .NET object, which allows us to load and execute .NET assemblies in a protected process context.
- **Bypassing code integrity checks**: Using .NET Reflection (`Assembly.Load(byte[])`), we bypass the normal image section validation that occurs in PPL processes, allowing us to load unsigned, malicious code.
---
## **Credit**
This PoC is largely inspired by the research conducted by **James Forshaw** (a.k.a. [@tiraniddo](https://infosec.exchange/@tiraniddo)).
- **[Windows Bug Class: Accessing Trapped COM Objects with IDispatch](https://googleprojectzero.blogspot.com/2025/01/windows-bug-class-accessing-trapped-com.html)**
---
## **Disclaimer**
This PoC is intended for educational purposes only. It should not be used for any illegal or malicious activities. I do not take any responsibility for misuse or unintended consequences arising from the use of this code.
|
https://github.com/hedge-dev/XenosRecomp
|
XenosRecomp
A tool for converting Xbox 360 shaders to HLSL.
Languages: C++ (76.0%), C (22.0%), CMake (2.0%)
XenosRecomp
XenosRecomp
thirdparty
thirdparty
...
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
CMakeSettings.json
CMakeSettings.json
LICENSE.md
LICENSE.md
> README.md
# XenosRecomp
XenosRecomp is a tool that converts Xbox 360 shader binaries to HLSL. The resulting files can be recompiled to DXIL and SPIR-V using the DirectX Shader Compiler (DXC) for use in Direct3D 12 (D3D12) and Vulkan.
The current implementation is designed around [Unleashed Recompiled](https://github.com/hedge-dev/UnleashedRecomp), a recompilation project that implements a translation layer for the renderer rather than emulating the Xbox 360 GPU. Unleashed Recompiled specific implementations are placed under the `UNLEASHED_RECOMP` preprocessor macro.
Users are expected to modify the recompiler to fit their needs. **Do not expect the recompiler to work out of the box.**
## Implementation Details
Several components of the recompiler are currently incomplete or missing. Unimplemented or inaccurate features exist mainly because they were either unnecessary for Unleashed Recompiled or did not cause visible issues.
### Shader Container
Xbox 360 shaders are stored in a container that includes constant buffer reflection data, definitions, interpolators, vertex declarations, instructions, and more. It has been reverse-engineered just enough for use in Unleashed Recompiled, but additional research may be needed for other games.
### Instructions
Vector/ALU instructions are converted directly and should work in most cases.
Issues might happen when instructions perform dynamic constant indexing on multiple operands.
Instructions that result in `INF` or `NaN` might not be handled correctly. Most operations are clamped to `FLT_MAX`, but their behavior has not been verified in all scenarios.
Dynamic register indexing is unimplemented. A possible solution is converting registers into an array that instructions dynamically index into, instead of treating them as separate local variables.
### Control Flow
Since HLSL does not support `goto`, control flow instructions are implemented using a `while` loop with a `switch` statement, where a local `pc` variable determines the currently executing block.
The current implementation has not been thoroughly tested, as Sonic Unleashed contains very few shaders with complex control flow. However, any issues should be relatively easy to fix if problematic cases can be found.
For shaders with simple control flow, the recompiler may choose to flatten it, removing the while loop and switch statements. This allows DXC to optimize the shader more efficiently.
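To make the lowering concrete, here is a small sketch (in Python rather than HLSL, with invented block contents) of the `pc`-driven dispatch loop described above; only the control-flow pattern matters.

```python
# Each shader basic block becomes one branch of the dispatch; a local `pc`
# selects the next block to execute, emulating `goto` in a language without it.
def run_blocks(r0: float) -> float:
    pc = 0
    while True:
        if pc == 0:                    # block 0: conditional jump
            pc = 2 if r0 > 0.5 else 1
        elif pc == 1:                  # block 1: falls through to block 2
            r0 = -r0
            pc = 2
        elif pc == 2:                  # block 2: terminates the "shader"
            return r0
```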
### Constants
Both vertex and pixel shader stages use three constant buffers:
* Vertex shader constants: 4096 bytes (256 `float4` registers)
* Pixel shader constants: 3584 bytes (224 `float4` registers)
* Shared constants: Used specifically by Unleashed Recompiled
Vertex and pixel shader constants are copied directly from the guest render device, and shaders expect them in little-endian format.
Constant buffer registers are populated using reflection data embedded in the shader binaries. If this data is missing, the recompiler will not function. However, support can be added by defining a `float4` array that covers the entire register range.
Integer constants are unimplemented. If the target game requires them, you will need to make new constant buffer slots or append them to the existing ones.
Vertex and pixel shader boolean constants each contain 16 elements. These are packed into a 32-bit integer and stored in the shared constants buffer, where the Nth bit represents the value of the Nth boolean register. The Xbox 360 GPU supposedly supports up to 128 boolean registers, which may require increasing the size of the `g_Booleans` data type for other games.
All constant buffers are implemented as root constant buffers in D3D12, making them easy to upload to the GPU using a linear allocator. In Vulkan, the GPU virtual addresses of constant buffers are passed as push constants. Constants are accessed via preprocessor macros that load values from the GPU virtual addresses using `vk::RawBufferLoad`. These macros ensure the shader function body remains the same for both DXIL and SPIR-V.
Out-of-bounds dynamic constant accesses should return 0. However, since root constant buffers in D3D12 and raw buffer loads in Vulkan do not enforce this behavior, the shader developer must handle it. To solve this, each dynamic index access is clamped to the valid range, and out-of-bounds registers are forced to become 0.
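Here is an illustrative sketch (again in Python rather than HLSL) of the clamp-then-zero handling described above; the helper name and data layout are assumptions.

```python
# Dynamic constant access: clamp the index into range so the load is always
# valid, then force the result to zero when the original index was out of
# bounds, matching the expected out-of-bounds behaviour.
def load_constant(registers, index):
    clamped = min(max(index, 0), len(registers) - 1)
    value = registers[clamped]          # registers is a list of float4 values
    return value if 0 <= index < len(registers) else [0.0, 0.0, 0.0, 0.0]
```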
### Vertex Fetch
A common approach to vertex fetching is passing vertex data as a shader resource view and building special shaders depending on the vertex declaration. Instead, Unleashed Recompiled converts vertex declarations into native D3D12/Vulkan input declarations, allowing vertex shaders to receive data as inputs. While this has its limitations, it removes the need for runtime shader permutation compilation based on vertex declarations.
Unleashed Recompiled endian swaps vertex data before uploading it to the GPU by treating buffers as arrays of 32-bit integers. This causes the element order for 8-bit and 16-bit vertex formats to be swizzled. While no visual errors have been observed for 8-bit formats, 16-bit formats get swizzled to YXWZ. This is corrected using a `g_SwappedTexcoords` variable in the shared constants buffer, where each bit indicates whether the corresponding `TEXCOORD` semantic requires re-swizzling. While this assumption holds for Sonic Unleashed, other games may require additional support for other semantics.
Xbox 360 supports the `R11G11B10` vertex format, which is unsupported on desktop hardware. The recompiler implements this by using a specialization constant that manually unpacks this format for `NORMAL`, `TANGENT` and `BINORMAL` semantics in the vertex shader. Similar to `TEXCOORD` swizzling, this assumes the format is only used for these semantics.
Certain semantics are forced to be `uint4` instead of `float4` for specific shaders in Sonic Unleashed. This is also something that needs to be handled manually for other games.
Instanced geometry is handled completely manually on the Xbox 360. In Sonic Unleashed, the index buffer is passed as a vertex stream, and shaders use it to arbitrarily fetch vertex data, relying on a `g_IndexCount` constant to determine the index of the current instance. Unleashed Recompiled handles this by expecting instanced data to be in the second vertex stream and the index buffer to be in the `POSITION1` semantic. This behavior is completely game specific and must be manually implemented for other games.
Vulkan vertex locations are currently hardcoded for Unleashed Recompiled, chosen based on Sonic Unleashed's shaders while taking the 16 location limit into account. A generic solution would assign unique locations per vertex shader and dynamically create vertex declarations at runtime.
Mini vertex fetch instructions and vertex fetch bindings are unimplemented.
### Textures & Samplers
Textures and samplers use a bindless approach. Descriptor indices are stored in the shared constant buffer, with separate indices for each texture type to prevent mismatches in the shader. 1D textures are unimplemented but could be added easily.
Several texture fetch features, such as specifying LOD levels or sampler filters, are unimplemented. Currently, only the pixel offset value is supported, which is primarily used for shadow mapping.
Some Xbox 360 sampler types may be unsupported on desktop hardware. These cases are unhandled and require specialized implementations in the recompiler.
Cube textures are normally sampled using the `cube` instruction, which computes the face index and 2D texture coordinates. This can be implemented on desktop hardware by sampling `Texture2DArray`, however this lacks linear filtering across cube edges. The recompiler instead stores an array of cube map directions locally. Each `cube` instruction stores a direction in this array, and the output register holds the direction index. When the shader performs a texture fetch, the direction is dynamically retrieved from the array and used in `TextureCube` sampling. DXC optimizes this array away, ensuring the final DXIL/SPIR-V shader uses the direction directly.
This approach works well for simple control flow but may cause issues with complex shaders where optimizations might fail, leading to the array actually being dynamically indexed. A proper solution could implement the `cube` instruction exactly as the hardware does, and then reverse this computation during texture sampling. I chose not to do this approach in the end, as DXC was unable to optimize away redundant computations due to the lossy nature of the calculation.
### Specialization Constants
The recompiler implements several specialization constants, primarily as enhancements for Unleashed Recompiled. Currently, these are simple flags that enable or disable specific shader behaviors. The generic ones include:
- A flag indicating that the `NORMAL`, `TANGENT`, and `BINORMAL` semantics use the `R11G11B10` vertex format, enabling manual unpacking in the vertex shader.
- A flag indicating that the pixel shader performs alpha testing. Since modern desktop hardware lacks a fixed function pipeline for alpha testing, this flag inserts a "less than alpha threshold" check at the end of the pixel shader. Additional comparison types may need to be implemented depending on the target game.
While specialization constants are straightforward to implement in SPIR-V, DXIL lacks native support for them. This is solved by compiling shaders as libraries with a declared, but unimplemented function that returns the specialization constant value. At runtime, Unleashed Recompiled generates an implementation of this function, compiles it into a library, and links it with the shader to produce a final specialized shader binary. For more details on this technique, [check out this article](https://therealmjp.github.io/posts/dxil-linking/).
### Other Unimplemented Features
* Memory export.
* Point size.
* Possibly more that I am not aware of.
## Usage
Shaders can be directly converted to HLSL by providing the input file path, output HLSL file path, and the path to the `shader_common.h` file located in the XenosRecomp project directory:
```
XenosRecomp [input shader file path] [output HLSL file path] [header file path]
```
### Shader Cache
Alternatively, the recompiler can process an entire directory by scanning for shader binaries within the specified path. In this mode, valid shaders are converted and recompiled into a DXIL/SPIR-V cache, formatted for use with Unleashed Recompiled. This cache is then exported as a .cpp file for direct embedding into the executable:
```
XenosRecomp [input directory path] [output .cpp file path] [header file path]
```
At runtime, shaders are mapped to their recompiled versions using a 64-bit XXH3 hash lookup. This scanning method is particularly useful for games that store embedded shaders within executables or uncompressed archive formats.
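As a sketch of what that lookup amounts to (shown in Python; the real project does this in C++), the original shader binary is hashed with XXH3-64 and used as a key into the embedded cache; the names below are assumptions.

```python
# Map an original Xbox 360 shader binary to its recompiled DXIL/SPIR-V blob via
# a 64-bit XXH3 hash; 'recompiled_cache' stands in for the generated .cpp data.
import xxhash

recompiled_cache = {}  # {xxh3_64 hash of original shader: recompiled blob}

def find_recompiled(shader_binary):
    return recompiled_cache.get(xxhash.xxh3_64_intdigest(shader_binary))
```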
SPIR-V shaders are compressed using smol-v to improve zstd compression efficiency, while DXIL shaders are compressed as-is.
## Building
The project requires CMake 3.20 and a C++ compiler with C++17 support to build. While compilers other than Clang might work, they have not been tested. Since the repository includes submodules, ensure you clone it recursively.
## Special Thanks
This recompiler would not have been possible without the [Xenia](https://github.com/xenia-project/xenia) emulator. Nearly every aspect of the development was guided by referencing Xenia's shader translator and research.
## Final Words
I hope this recompiler proves useful in some way to help with your own recompilation efforts! While the implementation isn't as generic as I hoped it would be, the optimization opportunities from game specific implementations were too significant to ignore and paid off in the end.
If you find and fix mistakes in the recompiler or successfully implement missing features in a generic way, contributions would be greatly appreciated.
|
https://github.com/grimdoomer/Xbox360BadUpdate
|
Xbox360BadUpdate
Software only hypervisor exploit for Xbox 360
Languages: Assembly (69.7%), C# (16.4%), C++ (12.7%), Batchfile (1.2%)
Common
Common
Stage1
Stage1
Stage2
Stage2
Stage3
Stage3
Stage4
Stage4
...
.gitignore
.gitignore
README.md
README.md
build_exploit.bat
build_exploit.bat
update_data.bin
update_data.bin
xke_update.bin
xke_update.bin
> README.md

Bad Update is a non-persistent software only hypervisor exploit for Xbox 360 that works on the latest (17559) software version. This repository contains the exploit files that can be used on an Xbox 360 console to run unsigned code. This exploit can be triggered using one of the following games:
- Tony Hawk's American Wasteland (NTSC/PAL/RF see [here](https://github.com/grimdoomer/Xbox360BadUpdate/wiki/Tony-Hawk's-American-Wasteland#compatible-versions) for how to identify your version/region)
- Rock Band Blitz (trial or full game, see [here](https://github.com/grimdoomer/Xbox360BadUpdate/wiki/Rock-Band-Blitz) for more information)
**This exploit is NOT persistent!** This means your console will only be in a hacked state (able to run homebrew/unsigned code) for as long as it's kept on. **Once you reboot or power off your console you'll need to run the exploit again**. The exploit cannot be made persistent.
**Your Xbox 360 console must be on dashboard version 17559 in order to use this exploit**. While the exploit can be ported to any system software version I have only built the exploit for the 17559 dashboard version.
For information on how to use the exploit see the Quick Start section below. For information on how the exploit works or how to compile it from scratch see the following wiki pages:
- [Compiling](https://github.com/grimdoomer/Xbox360BadUpdate/wiki/Compiling)
- [Exploit Details](https://github.com/grimdoomer/Xbox360BadUpdate/wiki/Exploit-Details)
# Quick Start
To run the Bad Update exploit you'll need one of the supported games listed above and a USB stick. The following steps give a brief overview of how to run the exploit, for more detailed steps please see the [How To Use](https://github.com/grimdoomer/Xbox360BadUpdate/wiki/How-To-Use) wiki page.
1. Download the Xbox360BadUpdate-Retail-USB.zip file from the releases section and extract the files.
2. Format a USB stick to FAT32.
3. Copy the contents of the folder matching the game you want to use for the exploit to the root of the USB stick.
* If you're using Tony Hawk's American Wasteland copy the contents of the Tony Hawk's American Wasteland folder to the root of the USB stick.
* If you're using Rock Band Blitz copy the contents of the Rock Band Blitz folder to the root of the USB stick.
* The root of the USB stick should contain the following files/folders: BadUpdatePayload, Content, name.txt.
4. Place the unsigned executable you want to run when the exploit triggers into the BadUpdatePayload folder on the USB stick and name it "default.xex" (replace any existing file in the folder). This xex file must be in retail format and have all restrictions removed (see the wiki for how to do this).
5. Insert the USB stick into your Xbox 360 console and power it on.
6. Sign into the Player 1 profile and run the game you're using to trigger the exploit.
* If you're using Rock Band Blitz, there is no profile included. You can use any local/offline profile, or run the game completely signed out.
7. Follow the instructions for the game you chose to load the hacked game save file and begin the exploit process.
8. The console's ring of light will flash different colors/segments during the exploit process to indicate progress. For information on what the different values mean see the [LED Patterns and Meanings](https://github.com/grimdoomer/Xbox360BadUpdate/wiki/How-To-Use#led-patterns-and-meanings) section of the wiki.
9. Once the exploit triggers successfully the RoL should be fully lit in green. The hypervisor has now been patched to run unsigned executables and your unsigned default.xex file will be run.
The exploit has a 30% success rate and can take up to 20 minutes to trigger successfully. If after 20 minutes the exploit hasn't triggered you'll need to power off your Xbox 360 console and repeat the process from step 5.
# FAQ
**Q: Why do I have to re-run the exploit every time I turn my console on?**
A: The exploit is not persistent; it only works for as long as the console is kept on. Once the console is turned off or rebooted you'll need to run the exploit again.
**Q: What does this provide over the RGH Hack/should I use this instead of RGH?**
A: This is a software-only exploit that doesn't require you to open your console or perform any soldering. Other than that it's inferior to the RGH exploit in every way and should be considered a "proof of concept" rather than something you use in place of RGH.
**Q: Can this be turned into a softmod?**
A: No, the Xbox 360 boot chain is very secure with no attack surface to try and exploit. There will never exist a software only boot-to-hacked-state exploit akin to a "softmod".
**Q: Does this work on winchester consoles?**
A: Yes it has been confirmed to work on winchester consoles.
**Q: Does this work with the Original Xbox version of Tony Hawk's American Wasteland?**
A: No, it only works with the Xbox 360 version.
**Q: Can \<insert other skateboarding game here> be used with this?**
A: No, the Tony Hawk save game exploit is specific to Tony Hawk's American Wasteland and has nothing to do with it being a skateboarding game.
**Q: Can \<insert other music game here> be used with this?**
A: No, the Rock Band save game exploit is specific to Rock Band Blitz and has nothing to do with it being a music game.
**Q: I ran the exploit and nothing happened?**
A: The exploit has a 30% success rate. If after running for 20 minutes the exploit hasn't triggered you'll need to reboot your console and try again.
**Q: Why does the exploit only run a single unsigned xex?**
A: My goal was to hack the hypervisor, not to develop a robust all-in-one homebrew solution. Someone else will need to develop a post-exploit executable that patches in all the quality of life things you would get from something like the RGH exploit.
**Q: Why does the exploit take so long to trigger/have a low success rate?**
A: The exploit is a race condition that requires precise timing and several other conditions to be met for it to trigger successfully. As such it can take a while for that to happen.
|
https://github.com/YOUNG-bit/open_semantic_slam
|
open_semantic_slam
ICRA2025: OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding
Languages: Cuda (52.7%), Python (34.5%), C++ (11.5%)
configs
configs
media
media
scene
scene
submodules
submodules
...
.gitignore
.gitignore
LICENSE
LICENSE
LICENSE copy
LICENSE copy
README.md
README.md
final_vis.py
final_vis.py
> README.md
<h1 align="center"> OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding </h1>
<h3 align="center"> Dianyi Yang, Yu Gao, Xihan Wang, Yufeng Yue, Yi Yang∗, Mengyin Fu </h3>
<!-- <h3 align="center">
<a href="https://arxiv.org/abs/2408.12677">Paper</a> | <a href="https://youtu.be/rW8o_cRPZBg">Video</a> | <a href="https://gs-fusion.github.io/">Project Page</a>
</h3> -->
<h3 align="center">
<a href="https://www.youtube.com/watch?v=uNJ4vTpfGU0">Video</a> | <a href="https://young-bit.github.io/opengs-github.github.io/">Project Page</a>
</h3>
<p align="center">
<a href="">
<img src="./media/github.gif" alt="teaser" width="100%">
</a>
</p>
<p align="center"> All the reported results are obtained from a single Nvidia RTX 4090 GPU. </p>
Abstract: *Recent advancements in 3D Gaussian Splatting have significantly improved the efficiency and quality of dense semantic SLAM. However, previous methods are generally constrained by limited-category pre-trained classifiers and implicit semantic representation, which hinder their performance in open-set scenarios and restrict 3D object-level scene understanding. To address these issues, we propose OpenGSSLAM, an innovative framework that utilizes 3D Gaussian representation to perform dense semantic SLAM in open-set environments. Our system integrates explicit semantic labels derived from 2D foundational models into the 3D Gaussian framework, facilitating robust 3D object-level scene understanding. We introduce Gaussian Voting Splatting to enable fast 2D label map rendering and scene updating. Additionally, we propose a Confidence-based 2D Label Consensus method to ensure consistent labeling across multiple views. Furthermore, we employ a Segmentation Counter Pruning strategy to improve the accuracy of semantic scene representation. Extensive experiments on both synthetic and real-world datasets demonstrate the effectiveness of our method in scene understanding, tracking, and mapping, achieving 10× faster semantic rendering and 2× lower storage costs compared to existing methods.*
## Environments
Install requirements
```bash
conda create -n opengsslam python==3.9
conda activate opengsslam
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
```
Install submodules
```bash
conda activate opengsslam
pip install submodules/diff-gaussian-rasterization
pip install submodules/simple-knn
```
## Scene Interaction Demo
### 1. Download our pre-constructed Semantic 3D Gaussian scenes for the Replica dataset from the following link: [Drive](https://drive.google.com/drive/folders/1-bGoaZQRRKLHXFQGq3_6gu1KXhoePbQv?usp=drive_link)
### 2. Scene Interaction
```
python ./final_vis.py --scene_npz [download_path]/room1.npz
```
Here, users can click on any object in the scene to interact with it and use our Gaussian Voting method for real-time semantic rendering. Note that we use the **pynput** library to capture mouse clicks, which retrieves the click position on **the entire screen**. To map this position to the display window, we subtract an offset `(x_off, y_off)`, representing the window’s top-left corner on the screen. All tests were conducted on an Ubuntu system with a 2K resolution.
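For illustration, a minimal sketch of this screen-to-window mapping is shown below. The `x_off`/`y_off` values and the callback wiring are assumptions for this sketch; the actual interaction logic lives in `final_vis.py`.
```python
from pynput import mouse

# Top-left corner of the display window on the screen (depends on your desktop layout).
x_off, y_off = 100, 100

def on_click(x, y, button, pressed):
    if pressed:
        # pynput reports coordinates on the whole screen; subtract the window
        # offset to obtain pixel coordinates inside the rendering window.
        win_x, win_y = x - x_off, y - y_off
        print(f"clicked pixel ({win_x}, {win_y}) inside the window")

with mouse.Listener(on_click=on_click) as listener:
    listener.join()
```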
### *Key Press Description*
- **T**: Toggle between color and label display modes.
- **J**: Toggle between showing all objects or a single object.
- **K**: Capture the current view.
- **A**: Translate the object along the x-axis by +0.01.
- **S**: Translate the object along the y-axis by +0.01.
- **D**: Translate the object along the z-axis by +0.01.
- **Z**: Translate the object along the x-axis by -0.01.
- **X**: Translate the object along the y-axis by -0.01.
- **C**: Translate the object along the z-axis by -0.01.
- **F**: Rotate the object around the x-axis by +1 degree.
- **G**: Rotate the object around the y-axis by +1 degree.
- **H**: Rotate the object around the z-axis by +1 degree.
- **V**: Rotate the object around the x-axis by -1 degree.
- **B**: Rotate the object around the y-axis by -1 degree.
- **N**: Rotate the object around the z-axis by -1 degree.
- **O**: Output the current camera view matrix.
- **M**: Switch to the next mapping camera view.
- **L**: Increase the scale of all Gaussians.
- **P**: Downsample Gaussians using a voxel grid.
## SLAM Source Code
Coming soon!
<!-- ## Note
This repository contains the code used in the paper "OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding". The full code will be released upon acceptance of the paper. -->
## Acknowledgement
We sincerely thank the developers and contributors of the many open-source projects that our code is built upon.
* [GS_ICP_SLAM](https://github.com/Lab-of-AI-and-Robotics/GS_ICP_SLAM)
* [SplaTAM](https://github.com/spla-tam/SplaTAM/tree/main)
## Citation
If you find our paper and code useful, please cite us:
```bibtex
@article{yang2025opengs,
title={OpenGS-SLAM: Open-Set Dense Semantic SLAM with 3D Gaussian Splatting for Object-Level Scene Understanding},
author={Yang, Dianyi and Gao, Yu and Wang, Xihan and Yue, Yufeng and Yang, Yi and Fu, Mengyin},
journal={arXiv preprint arXiv:2503.01646},
year={2025}
}
```
|
https://github.com/LegNeato/rust-gpu-chimera
|
rust-gpu-chimera
Demo project showing a single Rust codebase running on CPU and directly on GPUs
Languages: Rust (82.6%), Shell (17.4%)
.cargo
.cargo
.github/workflows
.github/workflows
kernel
kernel
shared
shared
src
src
...
.gitignore
.gitignore
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
README.md
README.md
build.rs
build.rs
> README.md
# Rust GPU Chimera Demo
A cross-platform demo of a single Rust codebase running on both the CPU and GPU via
CUDA, Vulkan, Metal, and DirectX. There are no shader or kernel languages used, only
Rust.
### Supported Configurations
| Platform | Rust Features | Host | Backend | Driver | How it Works | Status |
| ------------ | ------------- | ------ | ------- | ------------- | -------------------- | ------------------ |
| **Linux** | - | CPU | - | - | Rust → Native | ✅ Working |
| Linux | `wgpu` | [wgpu] | Vulkan | Native | Rust → SPIR-V | ✅ Working |
| Linux | `ash` | [ash] | Vulkan | Native | Rust → SPIR-V | ✅ Working |
| Linux | `cuda` | [cust] | CUDA | Native | Rust → NVVM → PTX | ✅ Working |
| **macOS** | - | CPU | - | - | Rust → Native | ✅ Working |
| macOS | `wgpu` | [wgpu] | Metal | Metal | Rust → SPIR-V → MSL | ✅ Working |
| macOS | `wgpu,vulkan` | [wgpu] | Vulkan | [MoltenVK] | Rust → SPIR-V | ✅ Working |
| macOS | `wgpu,vulkan` | [wgpu] | Vulkan | [SwiftShader] | Rust → SPIR-V | ✅ Working |
| macOS | `ash` | [ash] | Vulkan | [MoltenVK] | Rust → SPIR-V | ✅ Working |
| macOS | `ash` | [ash] | Vulkan | [SwiftShader] | Rust → SPIR-V | ✅ Working |
| macOS | `cuda` | [cust] | CUDA | - | - | ❌ Unavailable[^1] |
| **Windows** | - | CPU | - | - | Rust → Native | ✅ Working |
| Windows | `wgpu` | [wgpu] | DX12 | Native | Rust → SPIR-V → HLSL | ✅ Working |
| Windows | `wgpu,vulkan` | [wgpu] | Vulkan | Native | Rust → SPIR-V | ✅ Working |
| Windows | `wgpu,vulkan` | [wgpu] | Vulkan | [SwiftShader] | Rust → SPIR-V | ✅ Working |
| Windows | `ash` | [ash] | Vulkan | Native | Rust → SPIR-V | ✅ Working |
| Windows | `ash` | [ash] | Vulkan | [SwiftShader] | Rust → SPIR-V | ✅ Working |
| Windows | `cuda` | [cust] | CUDA | Native | Rust → NVVM → PTX | ✅ Working |
| **Android** | - | CPU | - | - | Rust → Native | ✅ Working |
| Android | `wgpu` | [wgpu] | Vulkan | Native | Rust → SPIR-V | ✅ Working |
| Android | `ash` | [ash] | Vulkan | Native | Rust → SPIR-V | ✅ Working |
| Android | `cuda` | [cust] | CUDA | - | - | ❌ Unavailable[^2] |
| **iOS** | - | CPU | - | - | Rust → Native | ✅ Working |
| iOS | `wgpu` | [wgpu] | Metal | Metal | Rust → SPIR-V → MSL | 🔷 Should work |
| iOS | `wgpu,vulkan` | [wgpu] | Vulkan | [MoltenVK] | Rust → SPIR-V | 🔷 Should work |
| iOS | `ash` | [ash] | Vulkan | [MoltenVK] | Rust → SPIR-V | 🔷 Should work |
| iOS | `cuda` | [cust] | CUDA | - | - | ❌ Unavailable[^1] |
| **tvOS** | - | CPU | - | - | Rust → Native | ✅ Working |
| tvOS | `wgpu` | [wgpu] | Metal | Metal | Rust → SPIR-V → MSL | 🔷 Should work |
| tvOS | `wgpu,vulkan` | [wgpu] | Vulkan | [MoltenVK] | Rust → SPIR-V | 🔷 Should work |
| tvOS | `ash` | [ash] | Vulkan | [MoltenVK] | Rust → SPIR-V | 🔷 Should work |
| tvOS | `cuda` | [cust] | CUDA | - | - | ❌ Unavailable[^1] |
| **visionOS** | - | CPU | - | - | Rust → Native | ✅ Working |
| visionOS | `wgpu` | [wgpu] | Metal | Metal | Rust → SPIR-V → MSL | 🔷 Should work |
| visionOS | `wgpu,vulkan` | [wgpu] | Vulkan | [MoltenVK] | Rust → SPIR-V | 🔷 Should work |
| visionOS | `ash` | [ash] | Vulkan | [MoltenVK] | Rust → SPIR-V | 🔷 Should work |
| visionOS | `cuda` | [cust] | CUDA | - | - | ❌ Unavailable[^1] |
[^1]:
CUDA is not supported on macOS/iOS/tvOS/visionOS.
[ZLUDA](https://github.com/vosen/ZLUDA) could potentially enable CUDA on these
platforms in the future.
[^2]:
CUDA is not supported on Android.
[ZLUDA](https://github.com/vosen/ZLUDA) could potentially enable CUDA on Android in
the future.
## Running the Demo
The demo runs a bitonic sort on various data types (u32, i32, f32) with different sizes
and configurations.
### Linux
```bash
# CPU execution
cargo run --release
# Vulkan via wgpu
cargo run --release --features wgpu
# Vulkan via ash
cargo run --release --features ash
# CUDA (NVIDIA GPU required)
cargo run --release --features cuda
```
### macOS
```bash
# CPU execution
cargo run --release
# Metal via wgpu (SPIR-V → MSL translation)
cargo run --release --features wgpu
# Vulkan via wgpu (requires MoltenVK)
cargo run --release --features wgpu,vulkan
# Vulkan via ash (requires MoltenVK)
cargo run --release --features ash
```
### Windows
```bash
# CPU execution
cargo run --release
# DirectX 12 via wgpu (SPIR-V → HLSL translation)
cargo run --release --features wgpu
# Vulkan via wgpu
cargo run --release --features wgpu,vulkan
# Vulkan via ash
cargo run --release --features ash
# CUDA (NVIDIA GPU required)
cargo run --release --features cuda
```
You can replace `cargo run` with `cargo test` in any of the commands above to run the unit tests for the
same configuration.
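For example, to run the tests for the `wgpu` configuration (any of the feature combinations above can be substituted):
```bash
cargo test --release --features wgpu
```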
## Project Structure
```
rust-gpu-chimera-demo/
├── kernel/ # Compute kernel logic and entrypoints
│ └── src/
│ └── lib.rs
├── shared/ # Code that runs on both the CPU and GPU
│ └── src/
│ └── lib.rs
├── src/
│ ├── runners/ # Code that runs on the CPU/host and interfaces with the GPU
│ │ ├── cpu.rs
│ │ ├── cuda.rs
│ │ ├── wgpu.rs
│ │ └── ash.rs
│ ├── lib.rs
│ └── main.rs # Demo application binary
└── build.rs # Kernel compilation orchestration
```
[wgpu]: https://github.com/gfx-rs/wgpu
[ash]: https://github.com/ash-rs/ash
[cust]: https://github.com/Rust-GPU/Rust-CUDA/tree/main/crates/cust
[MoltenVK]: https://github.com/KhronosGroup/MoltenVK
[SwiftShader]: https://github.com/google/swiftshader
|
https://github.com/Azalea8/riscv_cpu
|
riscv_cpu
RISC-V instruction set: single-cycle and five-stage pipelined CPUs
Languages: Verilog (94.6%), Assembly (3.3%), Python (1.8%), C (0.3%)
_images
_images
five_pipeline_cpu
five_pipeline_cpu
single_cycle_cpu
single_cycle_cpu
...
README.md
README.md
> README.md
# RV32I CPU
> "Your Majesty, we have named this computer 'Qin I'. Look, over there, the central section is the CPU, the core computing element of the machine, formed from your five most elite corps. Against this diagram you can pick out the adder, the registers, and the stack memory inside it. The orderly area surrounding it is the memory; while building it we found ourselves short of men, but fortunately each unit's task there is the simplest, so we trained every soldier to hold flags of several colors, and once combined, one man can now do the work that originally took twenty. That brought the memory capacity up to the minimum required to run the 'Qin 1.0' operating system."
>
> 《三体》 (*The Three-Body Problem*), 刘慈欣 (Liu Cixin)
# CPU: A State Machine
A *CPU* has two core components: the register file and memory.
An endless stream of instructions flows through the *CPU*, changing the values held in the registers and in memory; that is really all a *CPU* does.
We can therefore model this state machine in *C*, which makes debugging much easier.
[A simple RISC-V simulator in about 300 lines](https://github.com/Azalea8/riscv_sim)
# Introduction to the RISC-V ISA
[RISC-V](https://riscv.org/) is an open instruction set architecture developed at *UC Berkeley*.
It consists of a base instruction set plus a series of optional extensions. In this project we focus on the *32*-bit base integer instruction set, *RV32I*.
*RV32I* contains *40* base instructions covering integer arithmetic, memory access, control transfer, and system control. This project does not implement the system-control instructions *ECALL/EBREAK*, the memory-ordering *FENCE* instruction, or the *CSR* access instructions, so *37* instructions are implemented in total.
In *RV32I* the program counter *PC* and the *32* general-purpose registers are all *32* bits wide, the memory address bus is also *32* bits, and every instruction is a fixed *32* bits long.
### RV32I Instruction Encoding
*RV32I* instruction encoding is very regular. There are six formats, four of which are basic formats and two of which are variants:
* **R-Type**: register-register instructions, with two source registers rs1, rs2 and one destination register rd.
* **I-Type**: immediate instructions, with one source register, one destination register, and a 12-bit immediate operand.
* **S-Type**: store instructions, with two source registers and a 12-bit immediate.
* **B-Type**: branch instructions; in effect a variant of *S-Type*, differing mainly in how the immediate is encoded.
* **U-Type**: long-immediate instructions, with one destination register and a 20-bit immediate operand.
* **J-Type**: long-jump instructions; in effect a variant of *U-Type*, differing mainly in how the immediate is encoded.
The four basic formats are shown below.

In the encoding, the opcode always occupies the low 7 bits of the instruction, and the source registers rs1, rs2 and destination register rd always sit in fixed positions, which makes decoding very convenient.
### General-Purpose Registers in RV32I
RV32I has 32 general-purpose 32-bit registers x0~x31 (addressed with a 5-bit encoding); register x0 always contains 0 and cannot be changed.
The aliases and usage conventions of the other registers are listed in the table below.
Note that across function calls some registers are saved by the caller and others by the callee; keep this in mind when mixing C and assembly.
**Definition and usage of the RV32I general-purpose registers**
| Register | Name | Use | Saver |
| --- | --- | --- | --- |
| x0 | zero | Constant 0 | – |
| x1 | ra | Return Address | Caller |
| x2 | sp | Stack Pointer | Callee |
| x3 | gp | Global Pointer | – |
| x4 | tp | Thread Pointer | – |
| x5~x7 | t0~t2 | Temp | Caller |
| x8 | s0/fp | Saved/Frame pointer | Callee |
| x9 | s1 | Saved | Callee |
| x10~x11 | a0~a1 | Arguments/Return Value | Caller |
| x12~x17 | a2~a7 | Arguments | Caller |
| x18~x27 | s2~s11 | Saved | Callee |
| x28~x31 | t3~t6 | Temp | Caller |
### Instruction Classes in RV32I
The RV32I instructions implemented in this project fall into the following three classes:
* **Integer arithmetic instructions**: operate on two register operands, or on one register and one immediate operand, and write the result to the destination register. The operations include signed and unsigned arithmetic, shifts, logical operations, and compare-and-set.
* **Control transfer instructions**: conditional branches such as *beq* and *bne* decide whether to jump based on register contents; unconditional jumps store the address of the following instruction, *PC+4*, into *rd* for use when returning from a function.
* **Memory access instructions**: add a register to an immediate offset and use the result as the address to load from or store to memory. Accesses can be 32-bit words, 16-bit halfwords, or 8-bit bytes, with signed and unsigned variants for loads. Note: RV32I is a [Load/Store](https://en.wikipedia.org/wiki/Load%E2%80%93store_architecture) architecture, so all data must first be loaded into registers before it can be operated on; unlike *x86*, arithmetic cannot be performed directly on memory operands.
### Integer Arithmetic Instructions
RV32I has 21 distinct integer arithmetic instructions; their encodings are shown below.

The operations they perform are described in the following table.
**Integer arithmetic instruction semantics**
| Instruction | Behavior |
| --- | --- |
| lui rd,imm20 | Shift the 20-bit immediate left by 12, fill the low 12 bits with zeros, and write the result to rd |
| auipc rd,imm20 | Shift the 20-bit immediate left by 12, fill the low 12 bits with zeros, add the resulting 32-bit value to pc, and write the sum to rd |
| addi rd,rs1,imm12 | Add immediate |
| slti rd,rs1,imm12 | Set if less than immediate (signed) |
| sltiu rd,rs1,imm12 | Set if less than immediate (unsigned) |
| xori rd,rs1,imm12 | XOR with immediate |
| ori rd,rs1,imm12 | OR with immediate |
| andi rd,rs1,imm12 | AND with immediate |
| slli rd,rs1,shamt | Shift left logical by immediate |
| srli rd,rs1,shamt | Shift right logical by immediate |
| srai rd,rs1,shamt | Shift right arithmetic by immediate |
| add rd,rs1,rs2 | Add |
| sub rd,rs1,rs2 | Subtract |
| sll rd,rs1,rs2 | Shift left logical |
| slt rd,rs1,rs2 | Set if less than (signed) |
| sltu rd,rs1,rs2 | Set if less than (unsigned) |
| xor rd,rs1,rs2 | XOR |
| srl rd,rs1,rs2 | Shift right logical |
| sra rd,rs1,rs2 | Shift right arithmetic |
| or rd,rs1,rs2 | OR |
| and rd,rs1,rs2 | AND |
The base integer instructions do not cover every conceivable operation. Anything they do not cover can be synthesized with pseudo-instructions or short instruction sequences; see the Common Pseudo-instructions section below.
### Control Transfer Instructions
RV32I has 6 conditional branch instructions and 2 unconditional jump instructions, encoded as shown below.

**Control transfer instruction semantics**
| Instruction | Behavior |
| --- | --- |
| jal rd,imm20 | Save PC+4 to rd, then set PC = PC + offset |
| jalr rd,rs1,imm12 | Save PC+4 to rd, then set PC = rs1 + imm |
| beq rs1,rs2,imm12 | Branch if equal |
| bne rs1,rs2,imm12 | Branch if not equal |
| blt rs1,rs2,imm12 | Branch if less than (signed) |
| bge rs1,rs2,imm12 | Branch if greater than or equal (signed) |
| bltu rs1,rs2,imm12 | Branch if less than (unsigned) |
| bgeu rs1,rs2,imm12 | Branch if greater than or equal (unsigned) |
### Memory Access Instructions
RV32I provides 8 instructions for accessing memory by byte, halfword, and word. All of them use register-plus-offset addressing. Addresses are not required to be aligned to 4-byte boundaries, although an implementation may require alignment during memory access. When a single byte or halfword is loaded, the data is sign-extended or zero-extended as required before being written to the register.

**Memory access instruction semantics**
| Instruction | Behavior |
| --- | --- |
| lb rd,imm12(rs1) | Load byte (sign-extended) |
| lh rd,imm12(rs1) | Load halfword (sign-extended) |
| lw rd,imm12(rs1) | Load word |
| lbu rd,imm12(rs1) | Load byte (zero-extended) |
| lhu rd,imm12(rs1) | Load halfword (zero-extended) |
| sb rs2,imm12(rs1) | Store byte |
| sh rs2,imm12(rs1) | Store halfword |
| sw rs2,imm12(rs1) | Store word |
### Common Pseudo-instructions
RISC-V defines a number of commonly used pseudo-instructions. They can be written in assembly programs, and the assembler expands them into the corresponding real instruction sequences. The table below lists the common ones.
**Common pseudo-instructions**
| Pseudo-instruction | Actual instruction sequence | Operation |
| --- | --- | --- |
| nop | addi x0, x0, 0 | No operation |
| li rd,imm | lui rd, imm[32:12]+imm[11]<br>addi rd, rd, imm[11:0] | Load a 32-bit immediate: load the upper part first, then add the lower part (note the lower part is sign-extended) |
| mv rd, rs | addi rd, rs, 0 | Copy register |
| not rd, rs | xori rd, rs, -1 | Bitwise NOT |
| neg rd, rs | sub rd, x0, rs | Negate |
| seqz rd, rs | sltiu rd, rs, 1 | Set if equal to zero |
| snez rd, rs | sltu rd, x0, rs | Set if not equal to zero |
| sltz rd, rs | slt rd, rs, x0 | Set if less than zero |
| sgtz rd, rs | slt rd, x0, rs | Set if greater than zero |
| beqz rs, offset | beq rs, x0, offset | Branch if equal to zero |
| bnez rs, offset | bne rs, x0, offset | Branch if not equal to zero |
| blez rs, offset | bge x0, rs, offset | Branch if less than or equal to zero |
| bgez rs, offset | bge rs, x0, offset | Branch if greater than or equal to zero |
| bltz rs, offset | blt rs, x0, offset | Branch if less than zero |
| bgtz rs, offset | blt x0, rs, offset | Branch if greater than zero |
| bgt rs, rt, offset | blt rt, rs, offset | Branch if rs > rt |
| ble rs, rt, offset | bge rt, rs, offset | Branch if rs <= rt |
| bgtu rs, rt, offset | bltu rt, rs, offset | Branch if rs > rt (unsigned) |
| bleu rs, rt, offset | bgeu rt, rs, offset | Branch if rs <= rt (unsigned) |
| j offset | jal x0, offset | Unconditional jump without saving the return address |
| jal offset | jal x1, offset | Unconditional jump, return address saved in x1 by default |
| jr rs | jalr x0, 0 (rs) | Jump to the address in rs without saving the return address |
| jalr rs | jalr x1, 0 (rs) | Jump to the address in rs, return address saved in x1 by default |
| ret | jalr x0, 0 (x1) | Return from a function call |
| call offset | auipc x1, offset[32:12]+offset[11]<br>jalr x1, offset[11:0] (x1) | Call a far-away subroutine |
| la rd, symbol | auipc rd, delta[32:12]+delta[11]<br>addi rd, rd, delta[11:0] | Load a global address, where delta is the difference between the PC and the global symbol's address |
| lla rd, symbol | auipc rd, delta[32:12]+delta[11]<br>addi rd, rd, delta[11:0] | Load a local address, where delta is the difference between the PC and the local symbol's address |
| l{b\|h\|w} rd, symbol | auipc rd, delta[32:12]+delta[11]<br>l{b\|h\|w} rd, delta[11:0] (rd) | Load a global variable |
| s{b\|h\|w} rd, symbol, rt | auipc rt, delta[32:12]+delta[11]<br>s{b\|h\|w} rd, delta[11:0] (rt) | Store a global variable |
# Single-Cycle Design
Having covered the RV32I instruction set architecture (ISA), we can now design the CPU's microarchitecture.
The same ISA can be implemented by completely different microarchitectures. An implementation only has to guarantee that the programmer-visible state, i.e. the PC, the general-purpose registers, and memory, follows the rules of the ISA as instructions execute; beyond that, the microarchitecture is free to do as it pleases.
In this project we first implement a single-cycle CPU, i.e. a CPU that completes all the work of an instruction within one clock cycle, finishing one instruction per cycle.
Executing an instruction generally involves the following steps:
1. **Fetch**: use this cycle's new PC to read the instruction from instruction memory and place it in the instruction register (IR).
2. **Decode**: analyze the fetched instruction, generate the control signals needed to execute it, compute the address of the next instruction, read the register operands from the register file, and generate the immediate.
3. **Execute**: perform the required operation on the operands in the ALU.
4. **Memory**: read or write the addressed memory location, if needed.
5. **Write-back**: write the final result back to the destination register.
**Logically each instruction goes through these *5* stages, but in a single-cycle design all *5* steps happen within the same clock cycle.**
These steps are carried out by the CPU's control path and datapath working together.
The control path generates the control signals, which direct the datapath to perform the actual data operations.
The datapath is the set of components that actually store, move, and operate on data.
Separating control path and datapath is a pattern you meet constantly in digital systems. The guiding principle is that the control path should be flexible and easy to modify and extend; its performance and latency are usually not the optimization target.
Conversely, the datapath should be simple and powerful, able to move and manipulate large amounts of data quickly and reliably.
With a simple, powerful datapath underneath, the control path can flexibly combine control signals to implement all kinds of behavior.
The figure below shows a reference design for an RV32I single-cycle CPU; its control path and datapath are described in the sections that follow.
**Some details have been changed, so treat the figure as a rough reference only.**

## PC Generation
The program counter *PC* determines the order in which the *CPU* executes instructions. Under sequential execution the next cycle's *PC* is this cycle's *PC+4*; when a jump occurs, the PC becomes the jump target address.
In this design each clock cycle starts at the rising edge of *CLK*. Before the previous cycle ends, combinational logic generates *NextPC*, the address of the instruction to be executed in the coming cycle.
When the rising edge arrives, *NextPC* is loaded into the *PC* register and into the instruction memory's address buffer at the same time, completing the first step of the cycle.
Computing *NextPC* involves instruction decoding and jump analysis, described in detail in the **NextPC** section below.
On system *reset* or power-up, the *PC* can be set to a fixed address such as all zeros, so the system starts executing from dedicated boot code.
## Instruction Memory
The instruction memory (*Instruction Memory*) is dedicated to holding instructions. Although the von Neumann architecture keeps instructions and data in a single memory, most modern *CPU*s separate the instruction cache from the data cache, and this project likewise stores instructions and data separately.
The instruction memory here plays the role of the *CPU*'s instruction cache. It is read on the rising clock edge, and its read address is the *PC*.
It only needs to support reads, and since every access fetches *4* bytes, each memory word can simply be *32* bits wide.
## Instruction Decode and Immediate Generation
After fetching this cycle's instruction *instr[31:0]*, the *CPU* decodes the *32*-bit instruction and produces the immediates for the various formats.
RV32I instructions are regular enough that the decoded fields can be taken directly from fixed bit positions:
```verilog
assign op    = instr[6:0];
assign rs1   = instr[19:15];
assign rs2   = instr[24:20];
assign rd    = instr[11:7];
assign func3 = instr[14:12];
assign func7 = instr[31:25];
```
Likewise, the immediate generator (*imm Generator*) can produce all of the immediates. Note that every immediate is sign-extended and the sign bit is always *instr[31]*:
```verilog
assign immI = {{20{instr[31]}}, instr[31:20]};
assign immU = {instr[31:12], 12'b0};
assign immS = {{20{instr[31]}}, instr[31:25], instr[11:7]};
assign immB = {{20{instr[31]}}, instr[7], instr[30:25], instr[11:8], 1'b0};
assign immJ = {{12{instr[31]}}, instr[19:12], instr[20], instr[30:21], 1'b0};
```
After the immediates for the various formats have been generated, a multiplexer driven by the control signal *ExtOP* selects which of the five candidates becomes the immediate generator's final output *imm*.
**Meaning of the ExtOP control signal**
| ExtOP | Immediate type |
| --- | --- |
| 000 | immI |
| 001 | immU |
| 010 | immS |
| 011 | immB |
| 100 | immJ |
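A minimal sketch of that selection mux, using the encodings from the table above (the module name and the default case are assumptions), might look like this:
```verilog
module imm_mux(
    input      [2: 0]  extOP,
    input      [31: 0] immI, immU, immS, immB, immJ,
    output reg [31: 0] imm
);
    always @(*) begin
        case (extOP)
            3'b000:  imm = immI;
            3'b001:  imm = immU;
            3'b010:  imm = immS;
            3'b011:  imm = immB;
            3'b100:  imm = immJ;
            default: imm = 32'b0;  // unused encodings
        endcase
    end
endmodule
```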
## Control Signal Generation
Once the instruction type is known, the control signals for that instruction must be generated to drive the datapath components. The *Control Signal Generator* derives them from the *opcode*, *func3*, and *func7* fields of *instr*.
The generated control signals are:
* **extOP**: 3 bits wide; selects which immediate the immediate generator outputs.
* **write_reg**: 1 bit wide; controls whether register rd is written back (1 = write back).
* **rs1Data_EX_PC**: 1 bit wide; selects the source of ALU input A: 0 selects rs1, 1 selects PC.
* **rs2Data_EX_imm32_4**: 2 bits wide; selects the source of ALU input B: 00 selects rs2, 01 selects imm (for immediate shift instructions only the low 5 bits are used), 10 selects the constant 4 (used to compute the return address PC+4 on jumps).
* **aluc**: 5 bits wide; selects the ALU operation.
* **pcImm_NEXTPC_rs1Imm**: 2 bits wide; unconditional jump selector: 01 selects pc + imm, 10 selects rs1Data + imm.
* **aluOut_WB_memOut**: 1 bit wide; selects the write-back source for rd: 0 selects the ALU output, 1 selects the data memory output.
* **write_mem**: 2 bits wide; controls writes to data memory: 01 writes a word, 10 writes a halfword, 11 writes a byte.
* **read_mem**: 3 bits wide; controls the data memory read format. The top bit selects a signed (1) or unsigned (0) read; the low two bits select the size: 00 returns 32'b0 directly, 01 is a 4-byte read (no sign handling needed), 10 is a 2-byte read, 11 is a 1-byte read.
These signals tell each datapath component what the instruction requires of it.
From them you can work out exactly what the system must do within one cycle for any given instruction.
If you have not taken a computer organization course, consult a textbook and trace how the datapath behaves for each instruction class under the given control signals. A brief summary follows, with a small control-table sketch for two cases given after the list:
* **lui**: ALU input A is unused; input B is the immediate with U-Type extension; the ALU copies input B and the result is written back to rd.
* **auipc**: ALU input A is PC; input B is the immediate with U-Type extension; the ALU adds them and the result is written back to rd.
* **Immediate arithmetic instructions**: ALU input A is rs1; input B is the immediate with I-Type extension; the ALU performs the operation selected by ALUctr and the result is written back to rd.
* **Register arithmetic instructions**: ALU input A is rs1 and input B is rs2; the ALU performs the operation selected by ALUctr and the result is written back to rd.
* **jal**: ALU input A is PC and input B is the constant 4; the ALU computes PC+4, which is written back to rd. The next PC is computed by the dedicated adder as PC+imm, with imm using J-Type extension.
* **jalr**: ALU input A is PC and input B is the constant 4; the ALU computes PC+4, which is written back to rd. The next PC is computed by the dedicated adder as rs1+imm, with imm using I-Type extension.
* **Branch instructions**: ALU input A is rs1 and input B is rs2; the ALU compares them (or tests for zero), and its flag selects NextPC, which is either PC+4 or PC+imm with B-Type extension, computed by the dedicated adder. No register is written.
* **Load instructions**: ALU input A is rs1; input B is the immediate with I-Type extension; the ALU adds them to form the address, memory is read (the access width is handled inside the memory), and the memory output is written back to rd.
* **Store instructions**: ALU input A is rs1; input B is the immediate with S-Type extension; the ALU adds them to form the address, the contents of rs2 are written to memory, and no register is written.
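To make this concrete, here is a partial, hedged sketch of what two rows of such a control table could look like, using the signal names and encodings listed above. The opcodes are the standard RV32I ones; the exact `read_mem` value for `lw` and the interpretation of `write_mem = 00` as "no write" are assumptions of this sketch, not something fixed by the text.
```verilog
// Partial control-signal generator: only the addi and lw cases are sketched.
module ctrl_sketch(
    input      [6:0] op,
    output reg [2:0] extOP,
    output reg       write_reg, rs1Data_EX_PC, aluOut_WB_memOut,
    output reg [1:0] rs2Data_EX_imm32_4, pcImm_NEXTPC_rs1Imm, write_mem,
    output reg [4:0] aluc,
    output reg [2:0] read_mem
);
    always @(*) begin
        // Defaults: write nothing, fall through to PC + 4.
        {extOP, write_reg, rs1Data_EX_PC, aluOut_WB_memOut}  = 0;
        {rs2Data_EX_imm32_4, pcImm_NEXTPC_rs1Imm, write_mem} = 0;
        aluc     = 5'b00000;
        read_mem = 3'b000;
        case (op)
            7'b0010011: begin                 // addi (other OP-IMM instructions also need func3)
                extOP              = 3'b000;  // immI
                write_reg          = 1'b1;    // write rd
                rs1Data_EX_PC      = 1'b0;    // ALU A = rs1
                rs2Data_EX_imm32_4 = 2'b01;   // ALU B = imm
                aluc               = 5'b00000; // add
            end
            7'b0000011: begin                 // lw (func3 would distinguish lb/lh/lw)
                extOP              = 3'b000;  // immI
                write_reg          = 1'b1;
                rs2Data_EX_imm32_4 = 2'b01;   // address = rs1 + imm
                aluc               = 5'b00000;
                aluOut_WB_memOut   = 1'b1;    // write back the memory output
                read_mem           = 3'b001;  // assumed: 4-byte read, sign bit irrelevant
            end
            default: ;                        // other opcodes omitted in this sketch
        endcase
    end
endmodule
```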
## NextPC
During execution, NextPC can take one of several values:
* Sequential execution: NextPC = PC + 4;
* jal: NextPC = PC + imm;
* jalr: NextPC = rs1Data + imm;
* Conditional branch: depending on the ALU's condition_branch flag, NextPC is either PC + 4 or PC + imm.
The jump logic is simply:
```verilog
module next_pc(
    input      [1: 0]  pcImm_NEXTPC_rs1Imm,
    input              condition_branch,
    input      [31: 0] pc, offset, rs1Data,
    output reg [31: 0] next_pc
);
    always @(*) begin
        if(pcImm_NEXTPC_rs1Imm == 2'b01)      next_pc = pc + offset;
        else if(pcImm_NEXTPC_rs1Imm == 2'b10) next_pc = rs1Data + offset;
        else if(condition_branch)             next_pc = pc + offset;
        else if(pc == 32'h6c)                 next_pc = 32'h6c; // CPU spins here (idle)
        else                                  next_pc = pc + 4;
    end
endmodule
```
Note that the *condition_branch* signal does not come from the control unit: whether a conditional branch is taken can only be determined by the *ALU*.
## Register File
The register file is the storage unit in the CPU that holds temporary data while instructions execute.
The base RISC-V CPU we implement, RV32I, has 32 registers.
RV32I is a [Load Store](https://en.wikipedia.org/wiki/Load%E2%80%93store_architecture) architecture, i.e. all data must first be loaded from memory into registers before arithmetic and logic operations can be performed on it.
Hence RV32I has 32 general-purpose registers, and each arithmetic instruction may need to read two source registers and write one destination register at the same time.
The 32 general-purpose registers x0~x31 are 32 bits each (5-bit register addresses), and x0 always contains 0 and cannot be changed.

The figure above shows the register file interface; the register file contains 32 32-bit registers.
It must support two reads and one write simultaneously, so it has two read addresses Ra and Rb, corresponding to rs1 and rs2 in RISC-V assembly, and a write address Rw corresponding to rd. All addresses are 5 bits wide.
The write data busW is 32 bits wide, and writes are enabled by a one-bit active-high RegWr signal.
The outputs are two 32-bit register values, busA and busB.
A write clock WrClk controls when writes happen.
For timing, reads can be asynchronous, i.e. the outputs change as soon as the addresses change, while writes can occur on the falling clock edge.
Note that if the same register is read and written in the same cycle, we assume by default that the read returns the old value.
Also note that register *x0* needs special handling: it is always all zeros.
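A minimal sketch of such a register file, following the port names and timing described above (the reset behavior and the exact read-during-write policy are simplified assumptions):
```verilog
module reg_file(
    input         WrClk,          // write clock (writes on the falling edge)
    input         RegWr,          // active-high write enable
    input  [4:0]  Ra, Rb, Rw,     // read addresses (rs1, rs2) and write address (rd)
    input  [31:0] busW,           // write data
    output [31:0] busA, busB      // read data
);
    reg [31:0] regs [0:31];

    // Asynchronous reads; x0 is hard-wired to zero.
    assign busA = (Ra == 5'd0) ? 32'b0 : regs[Ra];
    assign busB = (Rb == 5'd0) ? 32'b0 : regs[Rb];

    // Writes occur on the falling edge; writes to x0 are ignored.
    always @(negedge WrClk) begin
        if (RegWr && Rw != 5'd0)
            regs[Rw] <= busW;
    end
endmodule
```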
## ALU
The ALU is one of the core datapath components of the CPU; it performs the arithmetic and logic operations the CPU needs.
The ALU code is simply:
```verilog
module alu(
    input      [4: 0]  aluc,
    input      [31: 0] a, b,
    output reg [31: 0] out,
    output reg         condition_branch
);
    always @(*) begin
        condition_branch = 0;
        out = 32'b0;
        case (aluc)
            5'b00000: out = a + b;
            5'b00001: out = a - b;
            5'b00010: out = a & b;
            5'b00011: out = a | b;
            5'b00100: out = a ^ b;
            5'b00101: out = a << b;
            5'b00110: out = ($signed(a) < ($signed(b))) ? 32'b1 : 32'b0;
            5'b00111: out = (a < b) ? 32'b1 : 32'b0;
            5'b01000: out = a >> b;
            5'b01001: out = ($signed(a)) >>> b;
            5'b01010: begin
                out = a + b;
                out[0] = 1'b0;
            end
            5'b01011: condition_branch = (a == b) ? 1'b1 : 1'b0;
            5'b01100: condition_branch = (a != b) ? 1'b1 : 1'b0;
            5'b01101: condition_branch = ($signed(a) < $signed(b)) ? 1'b1 : 1'b0;
            5'b01110: condition_branch = ($signed(a) >= $signed(b)) ? 1'b1 : 1'b0;
            5'b01111: condition_branch = (a < b) ? 1'b1 : 1'b0;
            5'b10000: condition_branch = (a >= b) ? 1'b1 : 1'b0;
            default:  out = 32'b0;
        endcase
    end
endmodule
```
## Data Memory
The data memory holds global variables, the stack, and other data while the *CPU* runs. We recommend implementing at least *128* kB of data memory.
It must support reads and writes on the clock edge. The *RV32I* word size is *32* bits, but the data memory has to support not only *32*-bit accesses but also byte (*8*-bit) and halfword (*16*-bit) accesses.
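As a rough illustration, here is a word-organized sketch of such a memory using the `write_mem`/`read_mem` encodings from the control-signal list above. The depth, the byte-lane selection, and the use of a combinational read (the text suggests edge-triggered access) are simplifying assumptions.
```verilog
module data_mem(
    input             clk,
    input      [1:0]  write_mem,   // 00: no write, 01: word, 10: halfword, 11: byte
    input      [2:0]  read_mem,    // bit 2: signed; bits 1:0: 00 none, 01 word, 10 half, 11 byte
    input      [31:0] addr, wdata,
    output reg [31:0] rdata
);
    reg [31:0] mem [0:32767];      // 128 kB of storage, organised as 32-bit words

    wire [31:0] word  = mem[addr[31:2]];
    wire [15:0] half  = addr[1] ? word[31:16] : word[15:0];
    wire [7:0]  byte_ = addr[0] ? (addr[1] ? word[31:24] : word[15:8])
                                : (addr[1] ? word[23:16] : word[7:0]);

    // Reads: pick the addressed word/halfword/byte and sign- or zero-extend it.
    always @(*) begin
        case (read_mem[1:0])
            2'b01:   rdata = word;                                   // lw
            2'b10:   rdata = {{16{read_mem[2] & half[15]}},  half};  // lh / lhu
            2'b11:   rdata = {{24{read_mem[2] & byte_[7]}},  byte_}; // lb / lbu
            default: rdata = 32'b0;
        endcase
    end

    // Writes on the rising edge, at word, halfword, or byte granularity.
    always @(posedge clk) begin
        case (write_mem)
            2'b01: mem[addr[31:2]] <= wdata;                               // sw
            2'b10: if (addr[1]) mem[addr[31:2]][31:16] <= wdata[15:0];     // sh
                   else         mem[addr[31:2]][15:0]  <= wdata[15:0];
            2'b11: case (addr[1:0])                                        // sb
                       2'b00: mem[addr[31:2]][7:0]   <= wdata[7:0];
                       2'b01: mem[addr[31:2]][15:8]  <= wdata[7:0];
                       2'b10: mem[addr[31:2]][23:16] <= wdata[7:0];
                       2'b11: mem[addr[31:2]][31:24] <= wdata[7:0];
                   endcase
            default: ;  // no write
        endcase
    end
endmodule
```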
## Timing of the Single-Cycle CPU
In a single-cycle *CPU* every operation must complete within one cycle, and the reads and writes of the storage elements are the key timing concern. In this architecture the *PC*, the register file, the instruction memory, and the data memory are all state elements and must be implemented with registers or memories.
The instruction and data memories generally need at least several hundred KB of capacity, so it is advisable to control their reads and writes with the clock edge. Assume the rising edge marks the start of each cycle; all writes to memories and registers are then performed on the rising edge.
* At the rising edge that starts the cycle, *next_pc* is written into the *PC* register and *pc* is used to read the instruction memory.
* Once the instruction appears at the instruction memory output, combinational logic decodes it, producing the control signals, the register read/write addresses, the immediate, and so on.
* With the read addresses available, the two source registers are read asynchronously, and together with the immediate operand they arrive at the ALU inputs.
* The ALU is also combinational logic and starts computing as soon as its inputs are ready.
* Once the data memory read address is ready, the memory read can take place on the rising edge.
* Finally, the destination register and the data memory are written, so that in the next cycle these storage elements hold up-to-date values.
## Modular Design
A CPU is built from several cooperating modules, so it is best to settle each module's function and interface before starting to code. The following partitioning is offered as a reference:
* **CPU module**: the top level; its external interfaces include the clock, Reset, the instruction memory address/data lines, the data memory address/data lines, and whatever debug signals you design.
* **PC module**: holds the current instruction address.
* **NextPC module**: produces the next instruction address.
* **Instruction memory module**: interfaces are the clock, the address lines, and the data output.
* **ID module**: decodes the instruction.
* **Control signal generator module**: takes the instruction as input and outputs the various control signals.
* **Register file module**: interfaces are the read/write register numbers, the write data, the register control signals, the write clock, and the data outputs.
* **Immediate generator module**: takes the instruction as input and outputs the immediate type and the immediate value.
* **2-way multiplexer**: selects its output with a control signal.
* **3-way multiplexer**: selects its output with a control signal.
* **ALU module**: interfaces are the ALU operand inputs, the ALU control word, the result output, and the flag outputs.
* **Data memory module**: interfaces are the clock, the address lines, the data in/out, the memory access control signals, and the write enable.
Partition the modules and pin down their interconnections first, then develop each module separately; unit-test every module before integrating the whole system.
# Five-Stage Pipeline Design
## Overview
From here on we design a five-stage pipelined CPU.
Although the single-cycle design works correctly, modern designs do not use it because it is far too inefficient. The reason is that in a single-cycle design every clock period must be equally long, so the longest path through the processor, the critical path, determines the clock period. That path is most likely a *load* instruction, which uses five functional units in sequence: the instruction memory, the register file, the ALU, the data memory, and the register file again. Even though one instruction still completes per cycle, the long clock period can make the overall performance of a single-cycle implementation very poor.
Pipelining is an implementation technique that overlaps the execution of multiple instructions, and it is used almost universally today.
The apparent paradox of pipelining is that, for an individual instruction, the time for fetch, decode, execute, memory access, and write-back is not reduced; it actually grows. Pipelining is faster for many instructions because all of that work proceeds in parallel, so more work finishes per unit time: pipelining raises throughput. It does not shorten the latency of any single instruction, but when many instructions have to execute, the higher throughput reduces the total time for the task.
Executing a RISC-V instruction normally involves 5 steps:
* fetch the instruction from instruction memory
* decode the instruction and read registers
* execute the operation or compute an address
* access data memory (if necessary)
* write the result to a register (if necessary)
Our pipeline therefore has five stages.
Pipelining exploits the parallelism available between instructions in a sequential instruction stream; unlike multiprocessor programming, its advantage is that it is invisible to the programmer.
## The Basic Pipeline
Splitting instruction execution into five stages means a five-stage pipeline, and it means that at most five useful instructions can be executing in any cycle. In fact five instructions execute in every cycle: at start-up, for instance, only the fetch stage is doing real work while the other stages appear idle, but that does not mean they do nothing; they execute *nop* instructions, whose meaning is to do nothing at all.
We divide the datapath into five parts, each named after the corresponding stage of execution:
* IF: instruction fetch
* ID: instruction decode
* EX: execute
* ME: memory access
* WB: write back

Instructions and data normally move through these five stages from left to right as execution proceeds, never backwards. There are, however, two exceptions to the left-to-right flow:
* in the write-back stage, the result is written into the register file, which sits back in the decode stage of the datapath;
* when choosing the next PC, the selection is between the incremented PC and the branch target coming from the ME stage.
Right-to-left flow has no effect on the current instruction; it only affects the instructions that follow. Note that the first exception leads to **data hazards** and the second to **control hazards**.

The picture may suggest that three instructions in flight would need three separate datapaths, but in fact we can split the one datapath by inserting registers to hold the data, as if we placed a basket between every two steps to carry whatever the next stage needs.

Every instruction updates the PC, whether by incrementing it or by setting it to a branch target. The PC can be viewed as a pipeline register: it feeds the IF stage. Unlike the shaded pipeline registers in the figure, though, the PC is part of the visible architectural state; on an exception its contents must be preserved, whereas the contents of the pipeline registers can be discarded. (Our CPU does not deal with interrupts or exceptions, as there is no operating system.)
Each stage takes the data and control signals it needs from its pipeline register and delivers its results to the pipeline register of the next stage, as in the sketch after the figure below.

## Handling Data Hazards
### Forwarding
The previous section showed the power of pipelining and how the hardware executes work in a pipelined fashion. Now consider a more realistic example and look at what happens when a program actually runs:
```
sub x2,  x1, x3
and x12, x2, x5
or  x13, x6, x2
add x14, x2, x2
sw  x15, 100(x2)
```
The last four instructions (and, or, add, sw) all depend on the result that the first instruction, *sub*, places in register *x2*. Suppose *x2* holds *10* before the *sub* executes and *-20* afterwards; the programmer expects every later reference to *x2* to see *-20*.

The figure uses a simplified datapath to show the pipeline dependences among this sequence of five dependent instructions. All dependences are drawn as gray lines, and *CC1* at the top marks the first clock cycle. The first instruction writes *x2* and all the following instructions read it. *x2* is not written until the fifth clock cycle, so the correct register value is unavailable before then. Gray lines running from the top of the datapath to the bottom are the dependences; the ones that go backwards in time are the pipeline data hazards.
In *CC5* the same register is read and written in the same cycle. That is a structural hazard, and we must define how it behaves: we specify that the read returns the value being written in that cycle. This assumption matches many register file implementations, and with it no data hazard arises in that cycle.
As the figure shows, reads of register *x2* before the fifth clock cycle cannot return the result of the *sub*. The *add* and *sw* instructions therefore obtain the correct value *-20*, but the *and* and *or* instructions would read the stale value *10*.
The desired result is actually available at the end of the third clock cycle, i.e. at the end of the *sub* instruction's *EX* stage. When do *and* and *or* really need it? At the start of their own *EX* stages, in the fourth and fifth clock cycles respectively. So if the data is forwarded to the waiting unit as soon as it is produced, instead of waiting until it can be read from the register file, this sequence can execute without stalls.
In other words, whenever an instruction in its *EX* stage wants a register that an earlier instruction is about to write in its *ME* or *WB* stage, the to-be-written data must override the value read from the register file.
This gives two pairs of hazard conditions:
```
1a. EX/MEM.RegisterRd = ID/EX.RegisterRs1
1b. EX/MEM.RegisterRd = ID/EX.RegisterRs2
2a. MEM/WB.RegisterRd = ID/EX.RegisterRs1
2b. MEM/WB.RegisterRd = ID/EX.RegisterRs2
```
Because not every instruction writes a register, this policy as stated is not quite right: it would sometimes forward data when it should not. A simple fix is to check whether the *RegWrite* signal is active, i.e. inspect the *ME*- and *WB*-stage pipeline registers to see whether *RegWrite* is set. Recall also that *RISC-V* requires the value 0 whenever *x0* is used as an operand, so if an instruction in the pipeline has *x0* as its destination we must avoid forwarding its possibly nonzero result. Adding *EX/MEM.RegisterRd ≠ 0* to the first pair of conditions and *MEM/WB.RegisterRd ≠ 0* to the second makes them work correctly.

If the *ALU* inputs can come from any pipeline register, rather than only from the *id_ex* register, the correct data can always be forwarded. By adding multiplexers at the *ALU* inputs, together with the proper control, the pipeline can run at full speed in the presence of these data hazards.
Before forwarding was added, the *id_ex* register did not need to carry *rs1* and *rs2*; to support forwarding, those register numbers must now be stored in the *id_ex* pipeline register as well.


A more involved potential hazard arises between the result of the instruction in the *WB* stage, the result of the instruction in the *ME* stage, and the source operand of the instruction in the *EX* (ALU) stage.
For example:
```
add x1, x1, x2
add x1, x1, x3
add x1, x1, x4
...
```
In this case the forwarded data should come from the *ME* stage, because the *ME*-stage result is the most recent one. *EX/MEM.RegisterRd* therefore has priority over *MEM/WB.RegisterRd*; the forwarding-unit sketch after the figure below reflects this priority.

Consider also a store that immediately follows a load: the data the *sw* needs in its *ME* stage, namely *rs2Data*, becomes available just in time in the *lw* instruction's *me_wb* register. Supporting this case requires forwarding into the *ME* stage as well.
### Stalls
When an instruction tries to read a register in the cycle right after a load writes that register, forwarding cannot resolve the hazard; the pipeline must be stalled to eliminate the hazard created by this instruction combination.

In addition to the forwarding unit, therefore, a hazard detection unit is needed. It operates in the *ID* stage so that a stall can be inserted between the load and the instruction that uses its result. The unit watches for load instructions, and its control logic checks the following condition (a sketch of the unit follows below):
```
if (ID/EX.MemRead and
    ((ID/EX.RegisterRd = IF/ID.RegisterRs1) or (ID/EX.RegisterRd = IF/ID.RegisterRs2)))
        stall the pipeline
```
If the condition holds, the instruction stalls for one cycle. One cycle later the forwarding logic can handle the dependence and execution continues (without forwarding, yet another stall cycle would be required).
If the instruction in the *ID* stage is stalled, the instruction in the *IF* stage must be stalled as well, otherwise the instruction already fetched would be lost. Simply preventing the *PC* register and the *if_id* pipeline register from changing is enough to hold both instructions: with those registers frozen, the *IF* stage keeps fetching with the same *PC*, and the *ID* stage keeps reading registers using the same fields of the *if_id* pipeline register.
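A minimal sketch of that load-use hazard detection unit; the port names and the choice of a single `stall` output (freeze PC and if_id, bubble id_ex) are assumptions of this sketch:
```verilog
module hazard_detection_unit(
    input  [4:0] id_ex_rd,
    input        id_ex_mem_read,        // the instruction in EX is a load
    input  [4:0] if_id_rs1, if_id_rs2,
    output       stall                  // freeze PC and if_id, insert a bubble into id_ex
);
    assign stall = id_ex_mem_read &&
                   (id_ex_rd == if_id_rs1 || id_ex_rd == if_id_rs2);
endmodule
```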

My advice is not to read these pipeline diagrams row by row but column by column: focus on which instruction occupies each stage within a single clock cycle.

An overview of pipeline control: the figure shows the two forwarding multiplexers, a hazard detection unit, and a forwarding unit. Although the *ID* and *EX* stages are simplified (the sign-extended immediate and the branch logic are omitted), the figure still illustrates the essential hardware that forwarding requires.
## Handling Control Hazards
So far we have only looked at hazards involving arithmetic operations and data transfers, but pipeline hazards also arise from conditional branches. The figure below shows an instruction sequence and marks when the branch is resolved in this pipeline. An instruction must be fetched every clock cycle to keep the pipeline full, yet the decision about which instruction is the correct one to execute comes later; the resulting delay is called a control hazard or branch hazard, the counterpart of the data hazards discussed above.

The effect of a branch on the pipeline: the numbers to the left of the instructions (40, 44, and so on) are their addresses. Because the branch decides whether it is taken in its *ME* stage (the *beq* instruction's action in clock cycle 4 in the figure), the three instructions that follow it are fetched and begin executing. Without intervention, those three instructions would start executing before the *beq* jumps to the *lw* at address 72.
This treatment of control hazards is shorter than the data hazard discussion because control hazards are easier to understand, they occur less often than data hazards, and, unlike data hazards, there is no equally effective general cure, so simpler schemes are used.
### Assume the Branch Is Not Taken
Stalling the pipeline until the branch completes is very costly. One way to do better is to predict that conditional branches are not taken and keep executing the sequential instruction stream. If a branch does turn out to be taken, the instructions that have already been fetched and decoded are discarded and the pipeline resumes from the branch target. If branches are not taken about half the time, and discarding instructions is cheap, this cuts the cost of control hazards roughly in half.
Discarding instructions means flushing the instructions currently in the *IF*, *ID*, and *EX* stages of the pipeline.
### Reducing Branch Delay
One way to improve branch performance is to reduce the cost of a taken branch. So far we have assumed that the next *PC* for a branch is not available until the *ME* stage, but if the branch is resolved earlier in the pipeline, fewer instructions have to be flushed. Moving the branch decision earlier requires two things to happen earlier: computing the branch target address and evaluating the branch condition. Computing the target early is the easy part: the *PC* and the immediate field are already in the *if_id* pipeline register, so the branch-target adder simply moves from the *EX* stage to the *ID* stage. The target is then computed for every instruction, but only used when needed.
The hard part is the branch decision itself. For branch-if-equal, the two register values must be compared in the *ID* stage. Equality can be tested by XORing the corresponding bits and OR-reducing the result (a tiny sketch of this test is given at the end of this subsection). Moving branch resolution into *ID* also requires additional forwarding and hazard detection hardware, because a branch may depend on a result that is still in the pipeline and must still execute correctly after the optimization. For example, to implement branch-if-equal (or branch-if-not-equal), results must be forwarded to the equality-test logic in the *ID* stage. The complication is:
since register values are being compared, data hazards on those registers can still occur, so extra forwarding and stalling have to be added in the *ID* stage.
Difficult as this is, moving conditional branch execution into the *ID* stage is a worthwhile optimization, because it reduces the cost of a taken branch to a single discarded instruction: the instruction sitting in the *if_id* register is simply replaced with a *nop*.
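A tiny sketch of that ID-stage equality test (XOR then OR-reduce); the module and port names are assumptions:
```verilog
module branch_eq(
    input  [31:0] rs1Data, rs2Data,
    output        is_equal
);
    assign is_equal = ~(|(rs1Data ^ rs2Data));  // 1 only when every bit matches
endmodule
```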
### Branch Prediction in This Project
Regrettably, this project simply uses the predict-not-taken strategy. Because the pipeline is not deep, the cost of discarding instructions is acceptable. A modern processor with a pipeline of roughly 20 stages, where memory access sits far down the pipe, could have to discard a dozen or more instructions at once, which would be unacceptable, so branch prediction matters far more there. Both schemes described here are static branch prediction; more advanced dynamic prediction achieves much higher accuracy, and the branch predictors of modern processors are correct nearly *100%* of the time.
# Testing
A RISC-V CPU is a fairly complex digital system; every part of it needs thorough testing during development to guarantee the reliability of the whole.
Testing means inspecting waveforms. I used **Vivado 2023.1**, although frankly it is pure bloatware with an installer around **100 GB**; since we only need to look at waveforms and use very little of its functionality, you can view the waveforms with other software instead.
Download: [Vivado 2023.1](https://soc.ustc.edu.cn/Digital/lab0/vivado/)
If you want to write *C* programs and try running them, you need a *Linux* system with a *gcc* toolchain targeting the *riscv* instruction set; since our machines are *x86*, this is cross-compilation.
```bash
$ sudo apt update
$ sudo apt install build-essential gcc make perl dkms git gcc-riscv64-unknown-elf gdb-multiarch qemu-system-misc
```
Please run this on *Linux*; I borrowed it from the course I used when learning RISC-V operating system development.
Link: [循序渐进,学习开发一个RISC-V上的操作系统 - 汪辰 - 2021春 (Step by step: developing an operating system on RISC-V, Wang Chen, Spring 2021)](https://www.bilibili.com/video/BV1Q5411w7z5)
### Instruction Tests
During development, first make sure that every single instruction works. After writing the controller code for each instruction, test it accordingly; my approach was to hand-write assembly as machine words and check in the waveforms that it behaves correctly.
For example:
```verilog
inst_mem[0] = {12'h1, 5'b0, 3'b000, 5'b1, 7'b0010011};              // addi
inst_mem[1] = {12'h001, 5'b1, 3'b000, 5'h2, 7'b0010011};            // addi
// R-type instructions
inst_mem[2] = {7'b0, 5'b1, 5'h2, 3'b000, 5'h3, 7'b0110011};         // add
inst_mem[3] = {7'b010_0000, 5'h2, 5'h3, 3'b000, 5'h4, 7'b0110011};  // sub
...
```
### Behavioral Simulation
Once the basic instruction tests pass, the whole *CPU* can be debugged as a unit. The main goal of this integration testing is to verify that the instructions behave correctly together; you can write simple *C* code, compile and link it into machine code, and single-step through it.
Note that *C* function calls need a stack. Since our programs are small, we treat the entire memory space as the stack, so the stack pointer has to be initialized.
**reg_file.v**
```verilog
regs[5'd2] = 32'd128; // the memory is only 128 bytes, which is very little
```
To produce the executable we also need a linker script. Strictly speaking it is optional, but we want the instructions to start at address *0* so they are easy to cross-check.
**script.ld**
```ld
ENTRY(_start)
SECTIONS
{
    . = 0;
    .text :
    {
        *(.text)
    }
    .data :
    {
        *(.data)
    }
    .bss :
    {
        *(.bss)
    }
}
```
As for the source code itself, remember not to use the standard library, so the *C* here is only a stripped-down subset.
**main.c**
```c
void fun(int* x) {
    if(*x == 2) {
        *x += 1;
    } else {
        *x += 10;
    }
    return;
}

int main() {
    int x = 1;
    fun(&x);
    return 0;
}
```
Pass the appropriate flags when building the executable.
**Build the executable**
```bash
$ riscv64-unknown-elf-gcc -nostdlib -fno-builtin -march=rv32g -mabi=ilp32 -g -Wall main.c -o main.elf -T script.ld
```
**Disassemble to get the machine code**
```bash
$ riscv64-unknown-elf-objdump -d main.elf > main.s
```
**main.s**
```
main.elf:     file format elf32-littleriscv
Disassembly of section .text:
00000000 <fun>:
0: fe010113 addi sp,sp,-32
4: 00812e23 sw s0,28(sp)
8: 02010413 addi s0,sp,32
c: fea42623 sw a0,-20(s0)
10: fec42783 lw a5,-20(s0)
14: 0007a703 lw a4,0(a5)
18: 00200793 li a5,2
1c: 00f71e63 bne a4,a5,38 <fun+0x38>
20: fec42783 lw a5,-20(s0)
24: 0007a783 lw a5,0(a5)
28: 00178713 addi a4,a5,1
2c: fec42783 lw a5,-20(s0)
30: 00e7a023 sw a4,0(a5)
34: 01c0006f j 50 <fun+0x50>
38: fec42783 lw a5,-20(s0)
3c: 0007a783 lw a5,0(a5)
40: 00a78713 addi a4,a5,10
44: fec42783 lw a5,-20(s0)
48: 00e7a023 sw a4,0(a5)
4c: 00000013 nop
50: 01c12403 lw s0,28(sp)
54: 02010113 addi sp,sp,32
58: 00008067 ret
0000005c <main>:
5c: fe010113 addi sp,sp,-32
60: 00112e23 sw ra,28(sp)
64: 00812c23 sw s0,24(sp)
68: 02010413 addi s0,sp,32
6c: 00100793 li a5,1
70: fef42623 sw a5,-20(s0)
74: fec40793 addi a5,s0,-20
78: 00078513 mv a0,a5
7c: f85ff0ef jal ra,0 <fun>
80: 00000793 li a5,0
84: 00078513 mv a0,a5
88: 01c12083 lw ra,28(sp)
8c: 01812403 lw s0,24(sp)
90: 02010113 addi sp,sp,32
  94:   00008067    ret
```
After generating the machine code, it still has to be copied into the instruction memory array, which is a tedious process, so a small script with a regular expression can do it for you:
```python
# five_pipeline_cpu/sim/script.py
import re

def generate_verilog(mem_file, asm_file):
    with open(asm_file, 'r') as asm_f, open(mem_file, 'w') as mem_f:
        lines = asm_f.readlines()
        index = 0
        for line in lines:
            line = line.strip()
            if not line or line.startswith('file format'):
                continue
            match = re.match(r'^([0-9a-fA-F]+):\s+([0-9a-fA-F]{8})\s+(.+)$', line)
            if match:
                instruction = match.group(2)  # the machine-code field
                comment = match.group(3) if match.group(3) else ''  # the instruction text
                # write one inst_mem entry per line
                mem_f.write(f"inst_mem[{index}] = 32'h{instruction}; // {comment}\n")
                index += 1
            else:
                # lines that do not look like an instruction are reported for debugging
                print(f"Invalid instruction format || {line}")

# file names
asm_file = 'main.s'
mem_file = 'instruction.text'
# generate the entries with the script instead of copying them by hand
generate_verilog(mem_file, asm_file)
```
### Waveforms
I skip the waveform part here: it is just a matter of checking the data and control signals against the waveform viewer, a low-level but tedious step.
(It made my head spin...)
# References
[Version 1.0 of the Chinese translation of The RISC-V Reader](http://www.riscvbook.com/chinese/)
[Nanjing University, Digital Logic and Computer Organization lab course](https://nju-projectn.github.io/dlco-lecture-note/exp/11.html)
[University of Science and Technology of China, computer architecture lab course series](https://soc.ustc.edu.cn/)
[Computer Organization and Design: The Hardware/Software Interface, RISC-V Edition (2nd Edition)]()
[Computer Systems: A Programmer's Perspective (3rd Edition)]()
|
https://github.com/Wack0/maciNTosh
|
maciNTosh
PowerPC Windows NT ported to Power Macintosh systems
Languages: C (97.6%), Assembly (1.8%)
OldWorldIsoBuilder
OldWorldIsoBuilder
arcgrackle
arcgrackle
arcloader_grackle
arcloader_grackle
arcloader_unin
arcloader_unin
arcloaderold_grackle
arcloaderold_grackle
...
COPYING
COPYING
README.md
README.md
> README.md
# Windows NT for Power Macintosh
This repository currently contains the source code for the ARC firmware and its loader, targeting Power Macintosh systems using the *Gossamer* architecture (that is, MPC106 "Grackle" memory controller and PCI host, and "Heathrow" or "Paddington" super-I/O chip on the PCI bus). That is, the following systems:
* Power Macintosh G3 (beige)
* Macintosh PowerBook G3 Series *"Wallstreet"*, *"PDQ"*
* iMac G3 (tray-loading)
* Power Macintosh G3 (Blue & White) *"Yosemite"*
* Macintosh PowerBook G3 Bronze Keyboard *"Lombard"*
* Power Macintosh G4 PCI *"Yikes!"*
The repository additionally contains the source code for the ARC firmware and its loader, targeting PowerPC Macintosh systems using the *Mac99* architecture (the first iteration of which being the "Uni-North" memory controller and PCI host, and "KeyLargo" super-I/O chip on the PCI bus; later derivatives like "Intrepid" are also supported). That is, the following systems:
* PowerBook G3 Firewire *"Pismo"*
* iBook G3
* iBook G4
  * The mid-2005 iBook G4 (`PowerBook6,7`) uses a USB mouse internally, so the mouse will not work yet.
* PowerBook G4
  * The early 2005 and later PowerBook G4s (`PowerBook6,8` and `PowerBook5,6` and later) use a USB keyboard and mouse and are therefore currently not practically supported.
The following systems are theoretically supported, but currently not practically supported due to the lack of USB drivers:
* iMac G3 (slot-loading)
* iMac G4
* Power Macintosh G4 (AGP *"Sawtooth"* and later)
There may be issues on your hardware.
NT HAL and drivers have no source present for now.
## Drivers present in ARC firmware
* Cuda and PMU
* ADB keyboard
* Flat 32bpp video framebuffer, set up by the loader. Both ATI and nVidia hardware is supported, although some nVidia GPUs do not currently work.
* Mac I/O internal IDE controllers, forked from OpenBIOS (**there are no drivers for PCI IDE controllers!**)
  * The ATA-6 controllers used on some later Mac99 systems (Intrepid, U2) are supported. Please note LBA48 is not yet supported.
* On pre-Mac99 systems, MESH SCSI controller.
* USB OHCI forked from OpenBIOS (**on pre-Mac99 systems, broken, nonworking, and initialisation code commented out**)
## Drivers currently done for NT
* HAL for *Gossamer* chipset, including: NT boot time framebuffer, super I/O interrupt controller, Grackle PCI bus support, Cuda and PMU (including low level ADB), serial port for kernel debugging only
* HAL for *Mac99* chipset, including: NT boot time framebuffer, MPIC interrupt controller, support for all 3 PCI busses on Uni-North (of which one is AGP, but only the PCI subset is supported), PMU (including low level ADB), serial port for kernel debugging only
* Mac I/O internal IDE controllers and ATA-6 controllers, forked from `atapi.sys` from NT4 DDK
* General HID/storage driver, intended to also contain a USB stack in future but currently only implements ADB keyboard/mouse and ramdisk as floppy drive for installing drivers at text setup time
* Flat 32bpp video framebuffer miniport driver
## Software compatibility
NT 3.51 RTM and higher. NT 3.51 betas (build 944 and below) will need kernel patches to run due to processor detection bugs. NT 3.5 will never be compatible, as it only supports PowerPC 601.
(The additional suspend/hibernation features in NT 3.51 PMZ could be made compatible in theory but in practise would require all of the additional drivers for that to be reimplemented.)
## Installing
### Preliminary
* Grab binaries for your system from the releases page.
* For Gossamer/Grackle systems, burn the image to optical media. Be sure to use the correct image for your system: use `nt_arcfw_grackle_ow.iso` for an Old World system (PowerMac G3 beige, PowerBook G3 Wallstreet/PDQ) and `nt_arcfw_grackle.iso` for a New World system (iMac G3 tray-loading, PowerMac G3 blue&white, PowerBook G3 Lombard, PowerMac G4 Yikes).
* For Mac99 systems, you can write the image to a USB drive.
### Partitioning Disk
* Boot your PowerMac from the burned optical media.
* For Mac99 laptops, you can boot to Open Firmware and use the command `probe-usb multi-boot` to show the boot menu with USB device present.
* When you get to ARC firmware menu, go to `Run firmware setup`, then `Repartition disk for NT installation`.
* The disk partitioner will first let you enter partition size of the NT partition (up to the 16383x16x63 CHS limit, minus 32 MB ARC system partition + 1 MB for partition tables / MBR backup / OS 9 drivers / ARC environment variable storage, giving a maximum possible size of 8030 MB), then will drop to a menu allowing the creation of additional Mac partitions.
* If you choose an NT partition size over 2GB, the partition will be formatted to NTFS.
* Please be aware that in releases before 2024-11-11, the NTFS version used for formatting is **incompatible with NT 3.51**, so if you want to install NT 3.51, use a partition size that is 2GB or lower.
* After adding a partition to the list, the only way to remove from the list is by cancelling the operation and starting the partitioner again.
* After you have created all Mac partitions you want, choose `Finish partitioning and install`, and confirm the operation.
* When finished, the partitioner will ask to `Press any key to restart`. Do so, and boot your PowerMac from the CD or USB drive again.
### Installing NT
* For Gossamer/Grackle systems, if ARC firmware does not show `drivers.img ramdisk loaded`, go to `Run firmware setup`, then `Load driver ramdisk` - make sure it succeeds before continuing.
* Eject CD and insert your NT 4 or NT 3.51 CD.
* For Mac99 systems, the option to eject the CD is in the `Run firmware setup` menu.
* Go to `Run a program` and enter the path `cd:\ppc\setupldr` - this may be `cd01:` or `cd02:` (...) if you have multiple optical drives present on your system.
* This may error with `The file or device does not exist`, just go back to `Run a program` and try again if so.
* NT setupldr will start.
* You will receive the message `Setup could not determine the type of computer you have`.
* Choose `Other` (default selected option), just press `Enter` when asked for hardware support disk.
* Pick your system from the list - all are equivalent and will load the correct HAL for your system, which is either the Gossamer chipset HAL `halgoss` or the Mac99 chipset HAL `halunin`.
* Next you will receive the message `Setup could not determine the type of one or more mass storage drivers installed in your system`. Two drivers need to be loaded at this point:
* press `S` to pick a driver, choose `Other` from the list, press `Enter` when asked for hardware support disk
* Choose the first driver `Mac I/O IDE Controller`
* follow the previous steps again, but this time choose the second driver `PowerMac General HID & Storage`
* finally, press Enter to continue
* You will receive the message `Setup could not determine the type of video adapter installed in the system`. Choose `Other` from the list, press `Enter` when asked for hardware support disk, and choose the correct option depending on the OS you are installing.
* There are two options in this list; `Open Firmware Frame Buffer` is for NT 4, `Open Firmware Frame Buffer (NT 3.x)` is for NT 3.51.
* NT will boot and text setup will start. Go through the text setup.
* Under `Setup has determined that your computer contains the following hardware and software components`, change `Keyboard` from `Unknown` to `XT, AT or Enhanced Keyboard (83-104 keys)` and `Pointing Device` from `Unknown` to `No Mouse or Other Pointing Device`.
* Choose the `C:` drive from the partition list. If you chose to create an NT partition of size 2GB or less, it must be formatted.
* If you chose to create an NT partition of over 2GB in size, errors will be found by the disk examination process which will require a reboot. You will need to boot back into the ARC firmware from the CD or USB drive and follow the "Installing NT" steps again to get back to this point.
* On the second attempt, disk examination will succeed, so just choose the `C:` partition again in the NT text setup partition selector.
* Proceed through the rest of NT text and graphical setup as normal.
## Known issues (Grackle/Gossamer)
* On a laptop system you may wish to remove the battery. At least on Lombard, the only way to power off the system when it bugchecks is via PMU reset or via total power removal.
* That said, PMU reset on Wallstreet/PDQ is easier, done via keyboard combination.
* Currently the implemented drivers are the bare minimum to run and use NT.
* I have observed PMU hard shutdowns on NT boot, fixed only by a PMU reset. No idea what caused this.
* On Old World systems, if you have trouble booting to something that isn't the ARC firmware, holding `Esc` on boot will cause ARC firmware devices to be skipped.
## Known issues (Mac99)
* As USB drivers are not working yet, only laptop systems are supported.
* Currently the implemented drivers are the bare minimum to run and use NT.
## Dualboot quirks
If you create additional Mac partitions, please make note of the following:
* The Mac partitions are listed in the partition table as HFS partitions but are not formatted. Use Disk Utility from OS X 10.1 or above to format the partitions. (Erase the **volumes**, not the **drive**!)
* For releases after 2024-11-11 you can now also boot into OS 9, which will show dialogs for formatting every unformatted partition on startup.
* The OS X installer, and just booting OS 8/OS 9, will error if a valid MBR is present on the disk at all, which is required for NT. In ARC firmware, go to `Run firmware setup` then `Reboot to OSX install or OS8/OS9` if you wish to boot to those listed operating systems.
* For releases after 2024-11-11, ARC firmware now patches OS8/9 driver code when writing to disk, such that booting to OS8/9 does not need this option; however, if the on-disk driver partitions are updated by any means, this will be required again.
* Booting back to the ARC firmware will fix the MBR, so be sure to always use this option when unsure.
* In particular, formatting the created HFS partitions in OS X 10.2 and 10.3 will not work when a valid MBR is present!
## Building ARC firmware
You need devkitPPC. Additionally, a `libgcc.a` compiled for `powerpcle` must be present in `arcgrackle/gccle`. If you need to find one, it should be present on any Void Linux mirror; the current filename to search for as of 2024-07-12 is `cross-powerpcle-linux-gnu-0.34_1.x86_64.xbps`. Decompress it with `zstdcat cross-powerpcle-linux-gnu-0.34_1.x86_64.xbps -o cross-powerpcle-linux-gnu-0.34_1.x86_64.tar`, then pull the file out of the tarball: `usr/lib/gcc/powerpcle-linux-gnu/10.2/libgcc.a`.
* Ensure `DEVKITPPC` environment variable is set to your devkitPPC directory, usually `/opt/devkitpro/devkitPPC`
* Build the big endian libc: `cd baselibc ; make ; cd ..`
* Build the ARC firmware loader: `cd arcloader_grackle ; make ; cd ..`
* For Mac99, use the `arcloader_unin` folder instead.
* Build the little endian libc: `cd arcgrackle/baselibc ; make ; cd ../..`
* For Mac99, use the `arcunin/baselibc` folder instead.
* Build the ARC firmware itself: `cd arcgrackle ; make ; cd ..`
* For Mac99, use the `arcunin` folder instead.
Replace `stage1.elf` and `stage2.elf` inside the release image. For recreating the image from a folder dump, use your preferred tool to create a hybrid HFS+ISO image, making sure the `System` folder is blessed and the `BootX` file is of type `tbxi`.
Please note that `stage1.elf` must not be larger than 16KB and `stage2.elf` must not be larger than 224KB.
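If you want to verify the limits before repacking, a small script along these lines works (the paths are assumptions for illustration; point them at wherever your build produced the ELFs):
```python
import os

# Size limits stated above; the paths are placeholders for your build outputs
limits = {
    "stage1.elf": 16 * 1024,    # 16 KB
    "stage2.elf": 224 * 1024,   # 224 KB
}
for path, limit in limits.items():
    size = os.path.getsize(path)
    status = "OK" if size <= limit else "TOO LARGE"
    print(f"{path}: {size} bytes (limit {limit}) -> {status}")
```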
For building the Old World bootloader, see its readme; for creating an Old World ISO image, see OldWorldIsoBuilder.
## Acknowledgements
* libc used is [baselibc](https://github.com/PetteriAimonen/Baselibc)
* ELF loader and makefiles adapted from [The Homebrew Channel](https://github.com/fail0verflow/hbc)
* Some lowlevel powerpc stuff, and ARC firmware framebuffer console implementation and font, adapted from [libogc](https://github.com/devkitPro/libogc)
* Some ARC firmware drivers (IDE, USB) adapted from [OpenBIOS](https://github.com/openbios/openbios)
* USB drivers in OpenBIOS were themselves adapted from [coreboot](https://github.com/coreboot/coreboot)
* ISO9660 FS implementation inside ARC firmware is [lib9660](https://github.com/erincandescent/lib9660) with some modifications.
* FAT FS implementation inside ARC firmware is [Petit FatFs](http://elm-chan.org/fsw/ff/00index_p.html) with some modifications.
|
https://github.com/microsoft/nnscaler
|
nnscaler
nnScaler: Compiling DNN models for Parallel Training
Languages: Python (98.8%), C++ (1.2%)
docs
docs
examples
examples
nnscaler
nnscaler
tests
tests
...
.gitignore
.gitignore
.gitmodules
.gitmodules
.readthedocs.yaml
.readthedocs.yaml
CODE_OF_CONDUCT.md
CODE_OF_CONDUCT.md
LICENSE
LICENSE
> README.md
<img src="docs/source/images/nnScaler-c-1.png" alt="drawing" width="100" align="left"/>
nnScaler: Compiling DNN models for Parallel Training over Multiple Devices
==============
# What is nnScaler?
---------
nnScaler is a parallelization engine that compiles a deep neural network (DNN) model designed for single-GPU execution into a program capable of running in parallel across multiple GPUs.
<img src="docs/source/images/nnScaler_flow.png" alt="drawing" width="600"/>
# Latest News
nnScaler (also known by its code name CUBE) has been adopted by multiple product and research projects. This section includes some of the latest news from the team and partner projects.
* **2025-02-12** nnScaler 0.7 released: https://github.com/microsoft/nnscaler/releases/tag/0.7
* **2024-10-07** Diff-Transformer utilizes nnScaler for differential attention mechanism: [DIFFERENTIAL TRANSFORMER](https://arxiv.org/abs/2410.05258)
* **2024-05-09** YOCO utilizes nnScaler for long-sequence training: [(YOCO)You only cache once: Decoder-decoder architectures for language models](https://arxiv.org/abs/2405.05254)
* **2024-04-22** Post training for the long context version of [Phi-3 series](https://arxiv.org/abs/2404.14219)
* **2024-02-21** LongRoPE utilizes nnScaler to reduce both the training and inference costs: [LongRoPE: Extending LLM context window beyond 2 million tokens](https://arxiv.org/abs/2402.13753)
### System Highlights:
* Ease of Use: Only a few lines of code need to be changed to enable automated parallelization.
* Pythonic: The parallelization output is in PyTorch code, making it easy for users to understand and convenient for further development or customization.
* Extensibility: nnScaler exposes an API to support new operators for emerging models.
* Reliability: Verified through various end-to-end training sessions, nnScaler is a dependable system.
* Performance: By exploring a large parallelization space, nnScaler can significantly enhance parallel training performance.
**_DNN scientists_** can concentrate on model design with PyTorch on a single GPU, while leaving parallelization complexities to nnScaler. It introduces innovative parallelism techniques that surpass existing methods in performance. Additionally, nnScaler supports the extension of DNN modules with new structures or execution patterns, enabling users to parallelize their custom DNN models.
**_DNN system experts_** can leverage nnScaler to explore new DNN parallelization mechanisms and policies for emerging models. By providing user-defined functions for new operators not recognized by nnScaler, users can ensure seamless parallelization of novel DNN models, for example to facilitate long-sequence support in LLMs.
# Quick start
---------
## Installation
### Prerequisite
Install the following packages before the installation of nnScaler:
Python >= 3.9, < 3.11 (3.10 is recommended)
PyTorch >= 2.0, < 2.4 (2.2.0 is recommended)
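If you want to double-check an existing environment against these ranges, a minimal sketch (assuming PyTorch is already importable) could look like this:
```python
import sys

import torch

# Supported ranges stated above: Python >= 3.9, < 3.11; PyTorch >= 2.0, < 2.4
py_ok = (3, 9) <= sys.version_info[:2] < (3, 11)
torch_version = tuple(int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
torch_ok = (2, 0) <= torch_version < (2, 4)

print(f"Python {sys.version.split()[0]}: {'OK' if py_ok else 'unsupported'}")
print(f"PyTorch {torch.__version__}: {'OK' if torch_ok else 'unsupported'}")
```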
### Install nnScaler from source
Execute the commands below in the nnScaler directory:
pip install -r requirements.txt
pip install -e .
In addition, to avoid a *cppimport* error, you also need to include the nnScaler directory in the **PYTHONPATH** environment variable:
export NNSCALER_HOME=$(pwd)
export PYTHONPATH=${NNSCALER_HOME}:$PYTHONPATH
[//]: # (Reference output: Successfully installed MarkupSafe-2.1.5 contourpy-1.3.0 cppimport-22.8.2 cycler-0.12.1 dill-0.3.8 filelock-3.15.4 fonttools-4.53.1 fsspec-2024.6.1 importlib-resources-6.4.4 jinja2-3.1.4 kiwisolver-1.4.5 mako-1.3.5 matplotlib-3.9.2 more-itertools-10.4.0 mpmath-1.3.0 networkx-3.3 numpy-2.1.0 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-9.1.0.70 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.6.68 nvidia-nvtx-cu12-12.1.105 packaging-24.1 pillow-10.4.0 psutil-6.0.0 pulp-2.9.0 pybind11-2.13.5 pyparsing-3.1.4 python-dateutil-2.9.0.post0 pyyaml-6.0.2 six-1.16.0 sympy-1.13.2 torch-2.4.0 tqdm-4.66.5 triton-3.0.0 typing-extensions-4.12.2)
## Example Llama-3
### Prerequisite for Llama-3
Install the packages required to run Llama-3. In addition, a matching CUDA toolkit is needed when installing flash-attn. For example, [CUDA V11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive) is needed if using PyTorch 2.2.0.
python -m pip install transformers==4.40.0 flash-attn==2.5.5 tensorboard
### Model Access
Obtain access to the Llama-3 model from [HuggingFace](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), where you will receive an access token which should be set as an environment variable:
export HF_TOKEN=<HUGGINGFACE_ACCESS_TOKEN>
### Code Changes for Parallelization
You can find all the example code at `examples/llama`. As shown below, a user needs to:
* Wrap the Model: Include loss computation and other necessary components.
* Configure Components: Set up the model, optimizer, and dataloader.
* Initialize and Start: In the main function, create an nnScaler trainer with the above configurations and start the training process.
```python
# import the nnScaler build-in parallelization-capable trainer
from nnscaler.cli.trainer import Trainer
# wrap model to include loss computing, etc.
class WrapperModel(torch.nn.Module):
def __init__(self, model_id):
super().__init__()
self.model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation='flash_attention_2')
def forward(self, samples):
outputs = self.model.model(
input_ids=samples['net_input']['src_tokens'],
use_cache=False,
return_dict=False,
)
loss = torch.sum(chunk_linear_cross_entropy(outputs[0], self.model.lm_head.weight, samples['target'], ...))
return loss, samples['ntokens'], samples['nsentences']
def main(args):
# data config
dataloader_config = ...
# model config
model_config = ModelConfig(
type=WrapperModel,
args={
'model_id': args.model_id,
},
)
# optimizer hyperparameters
optimizer_config = OptimizerConfig(
type=MixedPrecisionAdamW,
args={'lr': 2e-5, 'betas': (0.9, 0.95), 'weight_decay': 0.0, 'fused': True},
#...
)
#...
# setup trainer with configs of dataloader/model/optimizer, etc.
trainer = Trainer(train_args=TrainerArgs(
#...
model=model_config,
optimizer=optimizer_config,
dataloader=dataloader_config,
#...
))
trainer.run()
```
### Run the example Llama-3 training
Then we can start the example, and all the parallelization tasks will be finished by nnScaler automatically.
```shell
cd examples/llama
# prepare training data:
python bookcorpus.py --data_path_or_name bookcorpus/bookcorpus --tokenizer_path_or_name meta-llama/Meta-Llama-3-8B-Instruct --save_path ./bookcorpus_llama3_4K --sequence_length 4096
# build the mini model
python create_mini_model.py --model_id meta-llama/Meta-Llama-3-8B-Instruct --output_id ./llama3_mini
#compile and run using data parallelism + zero1
torchrun --nproc_per_node=2 train.py --plan_ngpus 1 --runtime_ngpus 2 --name llama3_debug --model_id ./llama3_mini --dataset_path ./bookcorpus_llama3_4K
```
## Example nanoGPT
We also provide an example to demonstrate how to parallelize a model through a [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/)-compatible interface in nnScaler.
* Find the [nanoGPT](https://github.com/karpathy/nanoGPT) example in nnScaler repo:
```shell
cd examples/nanogpt
```
* Install nanoGPT's dependencies:
```shell
pip install -r requirements.txt
```
* Prepare dataset:
```shell
python nanoGPT/data/shakespeare_char/prepare.py
```
* Test with Single GPU
Now you can run `train_nnscaler.py` with [torchrun](https://pytorch.org/docs/stable/elastic/run.html):
torchrun --nproc_per_node=1 train_nnscaler.py nanoGPT/config/train_shakespeare_char.py
This will train a baby GPT model on a single GPU.
It will take several minutes and the best validation loss will be around 1.47.
* Test with Multi-GPU
By default, nnScaler parallelizes a model over GPUs with _data parallelism_.
If you have 4 GPUs on one node:
torchrun --nproc_per_node=4 train_nnscaler.py nanoGPT/config/train_shakespeare_char.py
Or if you have multiple nodes, for example 2 nodes with 4 GPUs each:
# on each node
torchrun --nnodes=2 --nproc_per_node=4 --rdzv-id=NNSCALER_NANOGPT --rdzv-backend=c10d --rdzv-endpoint=<IP> \
train_nnscaler.py nanoGPT/config/train_shakespeare_char.py
NOTE: The local batch size is fixed by default, so using more workers will result in a larger global batch size.
💡 For advanced usages, please stay tuned for our future release.
# Reference
---------
You may find the Artifact Evaluation for OSDI'24 with the guidance [here](https://github.com/microsoft/nnscaler/tree/osdi24ae).
Please cite nnScaler in your publications if it helps your research:
@inproceedings{lin2024nnscaler,
title = {nnScaler: Constraint-Guided Parallelization Plan Generation for Deep Learning Training},
author={Lin, Zhiqi and Miao, Youshan and Zhang, Quanlu and Yang, Fan and Zhu, Yi and Li, Cheng and Maleki, Saeed and Cao, Xu and Shang, Ning and Yang, Yilei and Xu, Weijiang and Yang, Mao and Zhang, Lintao and Zhou, Lidong},
booktitle={18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24)},
pages={347--363},
year={2024}
}
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third-party's policies.
## Contact
You may find our public repo from <https://github.com/microsoft/nnscaler> or microsoft internal repo <https://aka.ms/ms-nnscaler>.
For any questions or inquiries, please contact us at [nnscaler@service.microsoft.com](mailto:nnscaler@service.microsoft.com).
|
https://github.com/vstyler96/suyu
|
suyu
suyu, pronounced "sue-you" (wink wink) is the continuation of the world's most popular, open-source, Nintendo Switch emulator, yuzu. It is written in C++ with portability in mind, and we actively maintain builds for Windows, Linux and Android.
Languages: C++ (95.5%), Kotlin (2.9%), CMake (0.8%), GLSL (0.3%), NASL (0.2%), Python (0.2%)
.ci
.ci
.github
.github
.reuse
.reuse
CMakeModules
CMakeModules
LICENSES
LICENSES
...
.codespellrc
.codespellrc
.git-blame-ignore-revs
.git-blame-ignore-revs
.gitattributes
.gitattributes
.gitignore
.gitignore
.gitlab-ci.yml
.gitlab-ci.yml
> README.md
<!--
SPDX-FileCopyrightText: 2024 suyu emulator project
SPDX-License-Identifier: GPL v3
-->
**Note**: We do not support or condone piracy in any form. In order to use Suyu, you'll need keys from your real Switch system, and games which you have legally obtained and paid for. We do not intend to make money or profit from this project.
We are in great need of developers. Please join our Discord server below if you can help out with the project.
This repo is based on Yuzu EA 4176. Please contribute if you can!
<hr />
<h1 align="center">
<br>
<a href="https://gitlab.com/suyu-emu/suyu"><img src="dist/readme/suyu__Logo-Pill.svg" alt="suyu" height="128"></a>
<br>
<b>suyu</b>
<br>
</h1>
<h4 align="center"><b>suyu</b>, pronounced "sue-you" (wink wink), is the continuation of the world's most popular, open-source, Nintendo Switch emulator, yuzu.
<br>
It is written in C++ with portability in mind, and we actively maintain builds for Windows, Linux and Android.
</h4>
<p align="center">
<a href="#compatibility">Compatibility</a> |
<a href="#development">Development</a> |
<a href="#building">Building</a> |
<a href="https://gitlab.com/suyu-emu/suyu/-/pipelines">Pipelines</a>
<a href="#downloads">Downloads</a> |
<a href="#support">Support</a> |
<a href="#license">License</a>
</p>
## Status
We are trying to get the builds working. We are in need of developers. Join our Discord to contribute.
**Note**: This README is a fork of the original project's README, most links are broken!
## Compatibility
The emulator is capable of running most commercial games at full speed, provided you meet the [necessary hardware requirements](https://suyu-emu.org/help/quickstart/#hardware-requirements).
For a full list of games suyu supports, please visit our [Compatibility page](https://gitlab.com/suyu-emu/suyu/-/wikis/Compatibility).
Check out our [website](https://suyu.dev) for the latest news on exciting features, monthly progress reports, and more!
## Development
This project is completely free and open source, and it is made possible by many people who share the same interest.
Most of the development happens on GitLab. For development discussion, please join us on [Discord](https://discord.gg/2gQRBp44KT).
If you want to contribute, please take a look at the [Contributor's Guide](https://gitlab.com/suyu-emu/suyu/-/wikis/Contributing) and [Developer Information](https://gitlab.com/suyu-emu/suyu/-/wikis/Developer-Information).
You can also contact any of the developers on Discord in order to know about the current state of the emulator.
## Downloads
* __Windows__: [Legacy Artifacts](https://github.com/pineappleea/pineapple-src/releases)
* __Linux__: [Legacy Artifacts](https://github.com/pineappleea/pineapple-src/releases)
## Building
* __Windows__: [Wiki page](https://gitlab.com/suyu-emu/suyu/-/wikis/Building-for-Windows)
* __Linux__: [Wiki page](https://gitlab.com/suyu-emu/suyu/-/wikis/Building-for-Linux)
## Support
This project is completely free and open source, and it is made possible by many people who share the same interest. Please join the Discord server [here](https://discord.gg/2gQRBp44KT) to contribute.
## License
suyu is licensed under the free and open-source GPL v3 license.
|
https://github.com/Arntzen-Software/parallel-gs
|
parallel-gs
A compute shader emulation of the PlayStation 2 Graphics Synthesizer
Languages: C++ (98.7%)
dump
dump
gs
gs
misc/arch-pkgbuild-pcsx2
misc/arch-pkgbuild-pcsx2
sandbox
sandbox
scripts
scripts
...
.gitignore
.gitignore
.gitmodules
.gitmodules
CMakeLists.txt
CMakeLists.txt
COPYING.LGPLv3
COPYING.LGPLv3
README.md
README.md
> README.md
# paraLLEl-GS
paraLLEl-GS emulates the PlayStation 2 Graphics Synthesizer using Vulkan compute shaders.
It is similar in spirit to paraLLEl-RDP, with different implementation trade-offs.
The end goal is a no-compromises PS2 graphics emulation, i.e., retaining the accuracy of a CPU software renderer while supporting upscaling / super-sampling and being fast enough to do so on modest GPU hardware.
Unfortunately, while N64 has the extremely accurate Angrylion implementation that can be tested against,
I don't believe GSdx's software renderer is bit-wise accurate with real GS hardware, so
paraLLEl-GS currently does not aim for hardware bit-wise accuracy on varying interpolation.
Extremely detailed hardware tests would need to be written to reverse the exact behavior.
To my knowledge, such tests don't exist, at least not publicly.
It is a completely standalone implementation from scratch, and does not use GSdx from PCSX2.
The GS dump format is used to make debugging and triaging issues easier, but that is only relevant for development.
## Features
- 2x / 4x / 8x / 16x SSAA. More than 8x is arguably complete overkill, but it's there.
- Weave de-interlacer (could certainly be better)
- Auto-promotion to progressive scan for FFMD = 0
- CRTC field blending (and ability to turn the blur off)
- AA1 handling
- Lots of mitigation for bad up-sampling behavior
Generally, I tend to prefer super-sampling over straight up-scaling on SD content.
The mixing of SD UI elements and ultra-sharp polygon edges looks quite jarring to me.
Super-sampling also works much better with CRT emulation.
To upscale the anti-aliased content to screen, AMD FSR1 + RCAS can be used, and does a decent job here.
### Known missing features
- AA1 implementation is questionable. There are many details about exactly how it's supposed to work that are unknown to me.
## Implementation details
This is best left to blog posts.
## Tested GPU / driver combos
- RADV on RX 7600/6800.
- AMDVLK on RX 7600/6800.
- amdgpu-pro on RX 7600/6800.
- Steam Deck on SteamOS 3.6.6 / 3.7.
- RTX 4070 on Linux and Windows.
- RTX 3060 mobile on a Windows 11 laptop.
- Intel UHD 620 on Windows 11 (but obviously too slow for it to be practical).
- Arc A770/B580 on Mesa 24.3.1.
- Arc A770/B580 on Windows 10.
## Required driver features
- `descriptorIndexing`
- `timelineSemaphore`
- `storageBuffer8BitAccess`
- `storageBuffer16BitAccess`
- `shaderInt16`
- `scalarBlockLayout`
- Full subgroup support (minus clustered)
- Subgroup size control with full groups between 16 and 64 threads per subgroup
This should not be a problem for any desktop driver or somewhat modern mobile GPU.
## Contributors
- Hans-Kristian "themaister" Arntzen
- Runar Heyer
Runar's contributions were done as paid work for my company Arntzen Software AS as an employee.
He did the early study, studied game behavior, wrote a lot of tests,
implemented most of the PS2-specific details in `ubershader.comp`,
and implemented most of VRAM upload / texture caching shaders.
Most of the non-shader code was rewritten after the initial prototype implementation with a lot of hindsight from that earlier work.
## PCSX2 integration
The PCSX2 integration is early days and very experimental / hacky. An Arch Linux PKGBUILD can be found in `misc/`.
To build with Visual Studio, apply the Git patches manually and checkout parallel-gs in the correct folder (`pcsx2/GS/parallel-gs`).
There is very basic UI integration. As API, paraLLEl-GS can be chosen. The super-sampling rate can be modified.
Under display settings, some options are honored:
- Bilinear filtering. The sharp bilinear option uses FSR1 + RCAS.
- Anti-Blur
- Screen Offsets
- Show Overscan
- Integer Scaling
Save states seem to work, and GS dumps also work.
OSD does *not* work in the current integration.
## Dump format
The primary way to debug paraLLEl-GS is through dumps.
With `parallel-gs-replayer`, a PCSX2 generated dump (current upstream, version 8) can be replayed.
RenderDoc should be attached, and a RenderDoc capture covering the entire dump will be made automatically.
With `parallel-gs-stream`, a raw dump format that skips the header can be used.
See `misc/` for a hacky patch for PCSX2 that allows the use of `mkfifo` to test the renderer in complete isolation in real-time.
E.g.:
```
mkfifo /tmp/gs.stream
parallel-gs-stream /tmp/gs.stream
# Run some game and parallel-gs-stream should start pumping out frames when PCSX2 starts generating GS commands.
GS_STREAM=/tmp/gs.stream pcsx2
```
`parallel-gs-stream` can pause the emulation, step frames, and trigger captures on its own when RenderDoc is attached.
## Debugging
paraLLEl-GS can emit labels and regions which makes it easy to step through primitives being drawn.
The primary debugger is RenderDoc.
## License
The license for current code is LGPLv3+, but dual-licensing may be possible.
## Contributions
External contributions are currently not accepted. This may be relaxed eventually.
This repository is public only to facilitate testing and debug for the time being.
|
https://github.com/W-Ted/GScream
|
GScream
Official code for ECCV2024 paper: GScream: Learning 3D Geometry and Feature Consistent Gaussian Splatting for Object Removal
Languages: Python (57.9%), Cuda (33.3%), C++ (8.6%)
arguments
arguments
data
data
gaussian_renderer
gaussian_renderer
generator
generator
log_training
log_training
...
README.md
README.md
gscream.yaml
gscream.yaml
train.py
train.py
> README.md
<p align="center">
<h1 align="center"><strong>GScream: Learning 3D Geometry and Feature Consistent Gaussian Splatting for Object Removal</strong></h1>
<h3 align="center">ECCV 2024</h3>
<p align="center">
<a href="https://w-ted.github.io/">Yuxin Wang</a><sup>1</sup>,</span>
<a href="https://wuqianyi.top/">Qianyi Wu</a><sup>2</sup>,
<a href="http://www.cad.zju.edu.cn/home/gfzhang/">Guofeng Zhang</a><sup>3</sup>,
<a href="https://www.danxurgb.net/">Dan Xu</a><sup>1✉️</sup>
<br>
<sup>1</sup>HKUST,
<sup>2</sup>Monash University,
<sup>3</sup>Zhejiang University
</p>
<div align="center">
<a href=https://arxiv.org/abs/2404.13679><img src='https://img.shields.io/badge/arXiv-2404.13679-b31b1b.svg'></a>
<a href='https://w-ted.github.io/publications/gscream/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
</div>
</p>
## Installation
```
git clone https://github.com/W-Ted/GScream.git
cd GScream
conda env create -f gscream.yaml
conda activate gscream
cd submodules/diff-gaussian-rasterization/ && pip install -e .
cd ../simple-knn && pip install -e .
cd ../..
```
Since we used an RTX 3090, we hardcoded the `-gencode` arch flags to 'compute_86' and 'sm_86' in [setup.py](https://github.com/W-Ted/GScream/blob/e7cc71bf3e878d02d15b524fdb44f038eba59a2a/submodules/diff-gaussian-rasterization/setup.py#L29) when compiling 'diff-gaussian-rasterization'. For a Tesla V100, you may try changing them to 'compute_70' and 'sm_70' before compiling, as discussed in [issue#4](https://github.com/W-Ted/GScream/issues/4). A sketch of the change is shown below.
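As a rough illustration (not the exact file contents; check the real `setup.py` linked above), the arch change follows the standard nvcc `-gencode` flag pattern:
```python
# Hypothetical excerpt of the nvcc flags in setup.py; the real file may differ.
extra_compile_args = {
    "nvcc": [
        # RTX 3090 (Ampere), as shipped:
        # "-gencode=arch=compute_86,code=sm_86",
        # Tesla V100 (Volta), as suggested in issue #4:
        "-gencode=arch=compute_70,code=sm_70",
    ]
}
```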
## Dataset
We provide the processed SPIN-NeRF dataset with Marigold depths [here (~9.7 GB)](https://drive.google.com/file/d/1EODx3392p1R7CaX5bazhkDrfrDtnqJXv/view?usp=sharing). You can download it to the `data` directory and unzip it.
```
cd data
pip install gdown && gdown 'https://drive.google.com/uc?id=1EODx3392p1R7CaX5bazhkDrfrDtnqJXv'
unzip spinnerf_dataset_processed.zip && cd ..
```
Please refer to [SPIN-NeRF dataset](https://drive.google.com/drive/folders/1N7D4-6IutYD40v9lfXGSVbWrd47UdJEC) for the details of this dataset.
## Training & Evaluation
```
python scripts/run.py
```
All the results will be saved in the `outputs` directory.
## Acknowledgements
This project is built upon [Scaffold-GS](https://city-super.github.io/scaffold-gs). The in-painted images are obtained by [SD-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting) and [LaMa](https://github.com/advimman/lama). The depth maps are estimated by [Marigold](https://marigoldmonodepth.github.io/). The dataset we used is proposed by [SPIN-NeRF](https://spinnerf3d.github.io/). Kudos to these researchers.
## Citation
```BibTeX
@inproceedings{wang2024gscream,
title={GScream: Learning 3D Geometry and Feature Consistent Gaussian Splatting for Object Removal},
author={Wang, Yuxin and Wu, Qianyi and Zhang, Guofeng and Xu, Dan},
booktitle={ECCV},
year={2024}
}
```
|
https://github.com/ergrelet/themida-unmutate
|
themida-unmutate
Static deobfuscator for Themida, WinLicense and Code Virtualizer 3.x's mutation-based obfuscation.
Languages: Python (100.0%)
.github/workflows
.github/workflows
docs
docs
themida_unmutate
themida_unmutate
...
.gitignore
.gitignore
CHANGELOG.md
CHANGELOG.md
LICENSE
LICENSE
README.md
README.md
poetry.lock
poetry.lock
> README.md
# themida-unmutate
[](https://github.com/ergrelet/themida-unmutate/releases) [](https://www.python.org/downloads/) 
A Python 3 tool to statically deobfuscate functions protected by Themida,
WinLicense and Code Virtualizer 3.x's mutation-based obfuscation.
The tool has been **tested on Themida up to version 3.1.9**. It's expected to
work on WinLicense and Code Virtualizer as well.
A Binary Ninja plugin is also available [here](https://github.com/ergrelet/themida-unmutate-bn).
## Features
- Automatically resolve trampolines' destination addresses
- Statically deobfuscate mutated functions
- Rebuild fully working binaries
## Known Limitations
- Doesn't support ARM64 binaries
## How to Download
You can install the project with `pip`:
```
pip install themida-unmutate
```
A standalone PyInstaller build is available for Windows in "Releases".
## How to Use
Here's what the CLI looks like:
```
$ themida-unmutate --help
usage: themida-unmutate [-h] -a ADDRESSES [ADDRESSES ...] -o OUTPUT [--no-trampoline] [--reassemble-in-place] [-v] protected_binary
Automatic deobfuscation tool for Themida's mutation-based protection
positional arguments:
protected_binary Protected binary path
options:
-h, --help show this help message and exit
-a ADDRESSES [ADDRESSES ...], --addresses ADDRESSES [ADDRESSES ...]
Addresses of the functions to deobfuscate
-o OUTPUT, --output OUTPUT
Output binary path
--no-trampoline Disable function unwrapping
--reassemble-in-place
Rewrite simplified code over the mutated code rather than in a new code section
-v, --verbose Enable verbose logging
```
|
https://github.com/LeCAR-Lab/dial-mpc
|
dial-mpc
Official implementation for the paper "Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing". DIAL-MPC is a novel sampling-based MPC framework for legged robot full-order torque-level control with both precision and agility in a training-free manner.
Languages: Python (100.0%)
.github/workflows
.github/workflows
assets
assets
dial_mpc
dial_mpc
images
images
...
.gitignore
.gitignore
CITATION.cff
CITATION.cff
LICENSE
LICENSE
README.md
README.md
setup.py
setup.py
> README.md
# DIAL-MPC: Diffusion-Inspired Annealing For Legged MPC
<div align="center">
ICRA 2025, Best Paper Finalist
[[Website]](https://lecar-lab.github.io/dial-mpc/)
[[PDF]](https://drive.google.com/file/d/1Z39MCvnl-Tdraon4xAj37iQYLsUh5UOV/view?usp=sharing)
[[Arxiv]](https://arxiv.org/abs/2409.15610)
[<img src="https://img.shields.io/badge/Backend-Jax-red.svg"/>](https://github.com/google/jax)
[](https://opensource.org/licenses/Apache-2.0)
<img src="assets/joint.gif" width="600px"/>
</div>
This repository contains the code (simulation and real-world experiments with minimum setup) for the paper "Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing".
DIAL-MPC is a sampling-based MPC framework for legged robot ***full-order torque-level*** control with both precision and agility in a ***training-free*** manner.
DIAL-MPC is designed to be simple and flexible, with minimal requirements for specific reward design and dynamics model. It directly samples and rolls out in physics-based simulations (``Brax``) and does not require reduced-order modeling, linearization, convexification, or predefined contact sequences.
That means you can test out the controller in a plug-and-play manner with minimum setup.
## News
- 05/19/2025: 🫰 New demo for ball-spinning on finger can be run with `dial-mpc --example allegro_reorient`.
- 04/24/2025: 🎉 DIAL-MPC made into the best paper final list of ICRA 2025.
- 11/03/2024: 🎉 Sim2Real pipeline is ready! Check out the [Sim2Real](#deploy-in-real-unitree-go2) section for more details.
- 09/25/2024: 🎉 DIAL-MPC is released with open-source codes! Sim2Real pipeline coming soon!
https://github.com/user-attachments/assets/f2e5f26d-69ac-4478-872e-26943821a218
## Table of Contents
1. [Install](#install-dial-mpc)
2. [Synchronous Simulation](#synchronous-simulation)
3. [Asynchronous Simulation](#asynchronous-simulation)
4. [Deploy in Real](#deploy-in-real-unitree-go2)
5. [Writing Your Own Environment](#writing-custom-environment)
6. [Rendering Rollouts](#rendering-rollouts-in-blender)
7. [Citing this Work](#bibtex)
## Simulation Setup
### Install `dial-mpc`
> [!IMPORTANT]
> We recommend Ubuntu >= 20.04 + Python >= 3.10 + CUDA >= 12.3.
> You can create a mamba (or conda) environment before proceeding.
Our environment is Ubuntu 22.04 + Python 3.10 + CUDA 12.6.
```bash
git clone https://github.com/LeCar-Lab/dial-mpc.git --depth 1
cd dial-mpc
pip3 install -e .
```
## Synchronous Simulation
In this mode, the simulation will wait for DIAL-MPC to finish computing before stepping. It is ideal for debugging and doing tasks that are currently not real-time.
#### Run Examples
List available examples:
```bash
dial-mpc --list-examples
```
Run an example:
```bash
dial-mpc --example unitree_h1_jog
```
After rollout completes, go to `127.0.0.1:5000` to visualize the rollouts.
## Asynchronous Simulation
The asynchronous simulation is meant to test the algorithm before Sim2Real. The simulation rolls out in real-time (or scaled by `real_time_factor`). DIAL-MPC will encounter delay in this case.
When DIAL-MPC cannot finish the computation in the time defined by `dt`, it will print a warning. Slight overtime is acceptable, as DIAL-MPC maintains a buffer of the previous step's solution and will play out the planned action sequence until the buffer runs out.
List available examples:
```bash
dial-mpc-sim --list-examples
```
Run an example:
In terminal 1, run
```bash
dial-mpc-sim --example unitree_go2_seq_jump_deploy
```
This will open a mujoco visualization window.
In terminal 2, run
```bash
dial-mpc-plan --example unitree_go2_seq_jump_deploy
```
## Deploy in Real (Unitree Go2)
### Overview
The real-world deployment procedure is very similar to asynchronous simulation.
We use `unitree_sdk2_python` to communicate with the robot directly via CycloneDDS.
### Step 1: State Estimation
For state estimation, this proof-of-concept work requires an external localization module to get base **position** and **velocity**.
The following plugins are built-in:
- ROS2 odometry message
- Vicon motion capture system
#### Option 1: ROS2 odometry message
Configure `odom_topic` in the YAML file. You are responsible for publishing this message at 50 Hz or more, and ideally over 100 Hz. We provide an odometry publisher for the Vicon motion capture system in [`vicon_interface`](https://github.com/LeCAR-Lab/vicon_interface).
> [!CAUTION]
> All velocities in ROS2 odometry message **must** be in **body frame** of the base to conform to [ROS odometry message definition](https://docs.ros.org/en/noetic/api/nav_msgs/html/msg/Odometry.html), although in the end they are converted to world frame in DIAL-MPC.
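
As a sanity check for the frame convention, the conversion from body frame to world frame for the linear velocity is just a rotation by the base orientation. Below is a minimal numpy sketch, not DIAL-MPC's actual code, with the quaternion order (w, x, y, z) assumed:
```python
import numpy as np

def body_to_world_velocity(quat_wxyz, v_body):
    """Rotate a body-frame linear velocity into the world frame."""
    w, x, y, z = quat_wxyz  # assumed normalized
    # Standard quaternion-to-rotation-matrix conversion
    rot = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    return rot @ np.asarray(v_body)

# Identity orientation leaves the velocity unchanged
print(body_to_world_velocity((1.0, 0.0, 0.0, 0.0), [0.5, 0.0, 0.0]))  # [0.5 0. 0.]
```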
#### Option 2: Vicon (no ROS2 required)
1. `pip install pyvicon-datastream`
2. Change `localization_plugin` to `vicon_shm_plugin` in the YAML file.
3. Configure `vicon_tracker_ip`, `vicon_object_name`, and `vicon_z_offset` in the YAML file.
#### Option 3: Bring Your Own Plugin
We provide a simple ABI for custom localization modules; if you prefer not to use the built-in plugins, implement it in a Python file in your workspace.
```python
import numpy as np
import time
from dial_mpc.deploy.localization import register_plugin
from dial_mpc.deploy.localization.base_plugin import BaseLocalizationPlugin
class MyPlugin(BaseLocalizationPlugin):
def __init__(self, config):
pass
def get_state(self):
qpos = np.zeros(7)
qvel = np.zeros(6)
return np.concatenate([qpos, qvel])
def get_last_update_time(self):
return time.time()
register_plugin('custom_plugin', plugin_cls=MyPlugin)
```
> [!CAUTION]
> When writing custom localization plugin, velocities should be reported in **world frame**.
> [!NOTE]
> Angular velocity source is onboard IMU. You could leave `qvel[3:6]` in the returned state as zero for now.
The localization plugin can be changed in the configuration file. A `--plugin` argument can be supplied to `dial-mpc-real` to import a custom localization plugin from the current workspace.
### Step 2: Installing `unitree_sdk2_python`
> [!NOTE]
> If you are already using ROS2 with Cyclone DDS according to [ROS2 documentation on Cyclone DDS](https://docs.ros.org/en/humble/Installation/DDS-Implementations/Working-with-Eclipse-CycloneDDS.html), you don't have to install Cyclone DDS as suggested by `unitree_sdk2_python`. But do follow the rest of the instructions.
Follow the instructions in [`unitree_sdk2_python`](https://github.com/unitreerobotics/unitree_sdk2_python).
### Step 3: Configuring DIAL-MPC
In `dial_mpc/examples/unitree_go2_trot_deploy.yaml` or `dial_mpc/examples/unitree_go2_seq_jump.yaml`, modify `network_interface` to match the name of the network interface connected to Go2.
Alternatively, you can also pass `--network_interface` to `dial-mpc-real` when launching the robot, which will override the config.
### Step 4: Starting the Robot
Follow the [official Unitree documentation](https://support.unitree.com/home/en/developer/Quick_start) to disable sports mode on Go2. Lay the robot flat on the ground like shown.
<div style="text-align: center;">
<img src="images/go2.png" alt="Unitree Go2 laying flat on the ground." style="width:50%;">
</div>
### Step 5: Running the Robot
List available examples:
```bash
dial-mpc-real --list-examples
```
Run an example:
In terminal 1, run
```bash
# source /opt/ros/<ros-distro>/setup.bash # if using ROS2
dial-mpc-real --example unitree_go2_seq_jump_deploy
```
This will open a mujoco visualization window. The robot will slowly stand up. If the robot is squatting, manually lift the robot into a standing position. Verify that the robot states match the real world and are updating.
You can supply additional arguments to `dial-mpc-real`:
- `--custom-env`: custom environment definition.
- `--network-interface`: override network interface configuration.
- `--plugin`: custom localization plugin.
Next, in terminal 2, run
```bash
dial-mpc-plan --example unitree_go2_seq_jump_deploy
```
## Writing Custom Environment
1. If a custom robot model is needed, store it in `dial_mpc/models/my_model/my_model.xml`.
2. Import the base environment and config.
3. Implement required functions.
4. Register environment.
5. Configure config file.
Example environment file (`my_env.py`):
```python
from dataclasses import dataclass

# jax, System, and mjcf imports added so the example is self-contained
import jax
from brax import envs as brax_envs
from brax.base import System
from brax.envs.base import State
from brax.io import mjcf

from dial_mpc.envs.base_env import BaseEnv, BaseEnvConfig
import dial_mpc.envs as dial_envs
@dataclass
class MyEnvConfig(BaseEnvConfig):
    arg1: float = 1.0
    arg2: str = "test"
class MyEnv(BaseEnv):
def __init__(self, config: MyEnvConfig):
super().__init__(config)
# custom initializations below...
def make_system(self, config: MyEnvConfig) -> System:
model_path = ("my_model/my_model.xml")
sys = mjcf.load(model_path)
sys = sys.tree_replace({"opt.timestep": config.timestep})
return sys
    def reset(self, rng: jax.Array) -> State:
        # TODO: implement reset
        raise NotImplementedError

    def step(self, state: State, action: jax.Array) -> State:
        # TODO: implement step
        raise NotImplementedError
brax_envs.register_environment("my_env_name", MyEnv)
dial_envs.register_config("my_env_name", MyEnvConfig)
```
Example configuration file (`my_env.yaml`):
```yaml
# DIAL-MPC
seed: 0
output_dir: dial_mpc_ws/my_model
n_steps: 400
env_name: my_env_name
Nsample: 2048
Hsample: 25
Hnode: 5
Ndiffuse: 4
Ndiffuse_init: 10
temp_sample: 0.05
horizon_diffuse_factor: 1.0
traj_diffuse_factor: 0.5
update_method: mppi
# Base environment
dt: 0.02
timestep: 0.02
leg_control: torque
action_scale: 1.0
# My Env
arg1: 2.0
arg2: "test_2"
```
Run the following command to use the custom environment in synchronous simulation. Make sure that `my_env.py` is in the same directory from which the command is run.
```bash
dial-mpc --config my_env.yaml --custom-env my_env
```
You can also run asynchronous simulation with the custom environment:
```bash
# Terminal 1
dial-mpc-sim --config my_env.yaml --custom-env my_env
# Terminal 2
dial-mpc-plan --config my_env.yaml --custom-env my_env
```
## Rendering Rollouts in Blender
If you want better visualization, you can check out the `render` branch for the Blender visualization examples.
## Acknowledgements
* This codebase's environment and RL implementation is built on top of [Brax](https://github.com/google/brax).
* We use [Mujoco MJX](https://github.com/deepmind/mujoco) for the physics engine.
* Controller design and implementation is inspired by [Model-based Diffusion](https://github.com/LeCAR-Lab/model-based-diffusion).
## BibTeX
If you find this code useful for your research, please consider citing:
```bibtex
@misc{xue2024fullordersamplingbasedmpctorquelevel,
title={Full-Order Sampling-Based MPC for Torque-Level Locomotion Control via Diffusion-Style Annealing},
author={Haoru Xue and Chaoyi Pan and Zeji Yi and Guannan Qu and Guanya Shi},
year={2024},
eprint={2409.15610},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2409.15610},
}
```
|
https://github.com/Team-Resurgent/PrometheOS-Firmware
|
PrometheOS-Firmware
Languages: C++ (49.8%), C (42.6%), CSS (5.4%), C# (0.9%), JavaScript (0.7%), HTML (0.6%)
Build
Build
Input
Input
PrometheOSUtility
PrometheOSUtility
PrometheOSXbe
PrometheOSXbe
...
LICENSE-GPLv2.md
LICENSE-GPLv2.md
LICENSE.md
LICENSE.md
README.md
README.md
> README.md
<div align="center">
# Team Resurgent Presents, PrometheOS
**A Modern Xenium OS for the Original Xbox**
[](https://github.com/Team-Resurgent/Repackinator/blob/main/LICENSE.md)
[](https://discord.gg/VcdSfajQGK)
[](https://ko-fi.com/J3J7L5UMN)
[](https://www.patreon.com/teamresurgent)
</div>
<div>
Please see below for instructions regarding PrometheOS and how to use.
BEWARE: It is recommended that you have an external Xenium programmer in case you're no longer able to boot into PrometheOS
# Working Modchips
* Aladdin 1mb
* Aladdin 2mb
* Xchanger V2.5
* Xecuter 3 (Genuine *Purple* + Christmas Edition *Red & White*)
* Open Xenium
* Legacy Xenium
* Modxo (works with official, YD-RP2040 & RP2040 Zero Picos)
## System Requirements
### Minimum
* Visual Studio 2003 With XDK (5933 Recommended)
* Visual Studio 2022
## How To Contribute
Create a PR and we will be more than happy to review and update appropriately
## Instructions
### Building PrometheOS Web
* Open PrometheOSUtility\PrometheOSUtility.sln in VS2022 and select PrometheOSWeb as the startup project
* Build and run
Notes:
1) If you change any web files and want to test them with the XBE, you can update the XBE using the instructions described in 'Packaging PrometheOS firmware'
2) In each of the js files there is a http://{ipaddress}; change ipaddress to that of your Xbox running PrometheOS to fully test.
### Building XBE / Testing
BEWARE: Certain actions will write to Xenium flash unless you disable the ENABLE_XENIUM define in xenium.cpp, which simulates the flash instead.
* Open PrometheOSXbe\PrometheOSXbe.sln in VS2003
* Compile as Debug to test, or Release for packaging as described in 'Packaging PrometheOS firmware'
### Packaging PrometheOS firmware
* Open PrometheOSUtility\PrometheOSUtility.sln in VS2022 and select PrometheOSPacker as the startup project
* Build and run
* Follow on screen prompts
* Flash your Xenium to test (beware: it is recommended that you have an external Xenium programmer)
Notes:
1) Modify the variable prometheosWebTestIp to the IP used in the web .js files (this is essential so that, when run on the Xbox, it uses the local Xbox's IP)
2) If you want to embed an installer logo, modify the installerName variable appropriately
3) If you would like the packaged result to be uploaded to an Xbox / Xenium programmer, uncomment the FTP section of code and enter the relevant FTP details
Notes:
1) You will need to provide mcpx_1.0.bin in the roms folder in order to run
2) The XEMU is a special build of XEMU which fully simulates a Xenium modchip; it is based upon initial code by Ryzee119 and extended to create a full implementation of the flash. (https://github.com/Team-Resurgent/xemu/tree/xenium)
Attribution:
This project includes a small portion of software developed by MakeMHz<br/>
Original project: https://github.com/MakeMHz/xbox-hd-plus-app<br/>
Licensed under the GNU General Public License version 2. See LICENSE-GPLv2.md.
</div>
|
https://github.com/beowolx/rensa
|
rensa
High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datasets
Languages: Python (56.7%), Rust (43.3%)
.devcontainer
.devcontainer
.github/workflows
.github/workflows
assets
assets
benchmarks
benchmarks
src
src
...
.gitignore
.gitignore
Cargo.lock
Cargo.lock
Cargo.toml
Cargo.toml
LICENSE
LICENSE
README.md
README.md
> README.md
# Rensa: A novel high-performance MinHash Implementation in Rust
## Introduction
Rensa (Swedish for "clean") is a high-performance MinHash suite written in Rust with Python bindings. It's designed for efficient similarity estimation and deduplication of large datasets. **It's 40x faster than `datasketch` for MinHash operations while producing the same results and consuming less memory.**

Rensa initially implemented a variant of the MinHash algorithm (`R-MinHash`) that combined ideas from traditional MinHash and the C-MinHash algorithm. It now also offers a more direct `C-MinHash` implementation.
Rensa is particularly useful in scenarios where you need to:
- Quickly estimate the similarity between large sets of data
- Deduplicate large datasets
- Perform locality-sensitive hashing (LSH) for approximate nearest neighbor search
Use cases include:
- Content deduplication in large document collections
- Identifying similar items in recommendation systems
- Clustering of high-dimensional data
- Near-duplicate detection in web crawling
## Quick Start with Google Colab
Want to try Rensa right away? Check out our interactive Google Colab notebook that demonstrates how to use Rensa to deduplicate a dataset from Hugging Face:
[](https://colab.research.google.com/drive/1o1nzwXWAa8kdkEJljbJFW1VuI-3VZLUn?usp=sharing)
Thanks [mlabonne](https://github.com/mlabonne) for the Colab notebook!
## Table of Contents
- [Rensa: A novel high-performance MinHash Implementation in Rust](#rensa-a-novel-high-performance-minhash-implementation-in-rust)
- [Introduction](#introduction)
- [Technical Implementation](#technical-implementation)
- [R-MinHash (Original Rensa Variant)](#r-minhash-original-rensa-variant)
- [C-MinHash (Based on the C-MinHash Paper)](#c-minhash-based-on-the-c-minhash-paper)
- [Installation](#installation)
- [Usage Example](#usage-example)
- [Deduplicating with Direct MinHash](#deduplicating-with-direct-minhash)
- [Using C-MinHash for Similarity](#using-c-minhash-for-similarity)
- [Deduplicating with RMinHashLSH](#deduplicating-with-rminhashlsh)
- [Inline Deduplication for Streaming Data](#inline-deduplication-for-streaming-data)
- [Algorithm Comparison: R-MinHash vs. C-MinHash vs. Datasketch](#algorithm-comparison-r-minhash-vs-c-minhash-vs-datasketch)
- [Benchmark Results](#benchmark-results)
- [MinHash Implementations Speed](#minhash-implementations-speed)
- [MinHash Implementations Accuracy (Deduplication Results)](#minhash-implementations-accuracy-deduplication-results)
- [LSH Performance (RMinHashLSH vs. Datasketch MinHashLSH)](#lsh-performance-rminhashlsh-vs-datasketch-minhashlsh)
- [Running the Benchmarks](#running-the-benchmarks)
- [Limitations and Future Work](#limitations-and-future-work)
- [Contributing](#contributing)
- [License](#license)
## Technical Implementation
Rensa offers two high-performance MinHash variants in Rust: `R-MinHash` (its original novel approach) and `C-MinHash` (an implementation closely following the C-MinHash paper). Both are designed for efficient similarity estimation and leverage common strategies for speed and memory efficiency:
- **Fast Hash Functions**: Rensa employs fast, non-cryptographic hash functions (based on FxHash or Murmur3) for processing input items.
- **Memory-Efficient Data Structures**: Implementations use compact data structures to minimize memory usage while maintaining fast access times.
- **Optimized Routines**: Core operations are optimized using techniques like batch processing and vectorized operations where appropriate.
### R-MinHash (Original Rensa Variant)
This variant was Rensa's initial novel approach. Key aspects of Rensa's `RMinHash` implementation include:
1. **Efficient Permutation Generation**: Instead of storing full permutations or using k independent hash functions, Rensa's `RMinHash` uses a unique pair of random numbers (a, b) for each of the `num_perm` permutations. These are used to generate hash values on-the-fly for each item.
2. **Simplified Approach**: While inspired by ideas related to C-MinHash, `RMinHash` is a distinct, simpler approach.
- It does not apply an initial global permutation (σ) to the input data's hash in the same way as described in the C-MinHash paper for its primary permutation step.
- It uses `num_perm` distinct pairs of random numbers (a, b) to simulate `num_perm` independent hash functions, rather than deriving them from a smaller set of parameters in a circulant manner.
3. **Trade-off**: `RMinHash`'s approach trades some of the potential variance reduction benefits of more complex MinHash schemes (like full C-MinHash) for simplicity and good performance. It still offers better performance than traditional MinHash in many scenarios.
Rensa's Locality-Sensitive Hashing (LSH) implementation, `RMinHashLSH`, currently utilizes the `RMinHash` variant for its index.
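As a rough mental model of the per-slot (a, b) scheme described in point 1 above (a schematic illustration only, not Rensa's Rust code; the hash constants and modulus are placeholders):
```python
# Schematic R-MinHash-style update: one (a, b) pair per signature slot,
# applied on the fly to every item hash. Constants are illustrative only.
MODULUS = (1 << 61) - 1

def rminhash_signature(item_hashes, ab_pairs):
    signature = [MODULUS] * len(ab_pairs)
    for h in item_hashes:
        for k, (a, b) in enumerate(ab_pairs):
            value = (a * h + b) % MODULUS
            if value < signature[k]:
                signature[k] = value
    return signature
```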
### C-MinHash (Based on the C-MinHash Paper)
Rensa also includes `CMinHash`, an implementation more directly aligned with the principles of the C-MinHash algorithm from the paper "[C-MinHash: Rigorously Reducing K Permutations to Two](https://arxiv.org/abs/2109.03337)". Key aspects of this implementation are:
1. **Two-Stage Hashing**: It utilizes two sets of universal hash function parameters for its permutation scheme:
- An initial hash transformation (σ) is applied to the hash of each input item using parameters `sigma_a` and `sigma_b`.
- A second pair of parameters, `pi_c` and `pi_d`, are used in combination with the σ-transformed item hash to generate the `num_perm` values in the MinHash signature. Specifically, for the `k`-th hash slot (where `k` is from 0 to `num_perm-1`), the value is derived from `(pi_c * sigma_transformed_hash + (pi_c * k + pi_d))`. The `(pi_c * k + pi_d)` terms are precomputed for each `k` to enhance efficiency.
2. **Highly Optimized Routines**: The `update` and `jaccard` methods in `CMinHash` are heavily optimized. This includes batch processing of input items, structuring calculations to improve cache utilization, and using vectorized operations (e.g., processing data in fixed-size chunks like blocks of 16 or 8) for faster computations.
3. **Performance Focus**: This implementation is specifically engineered for maximum single-threaded performance through these aggressive optimizations and careful memory access patterns.
These design choices result in a suite of MinHash implementations that are fast, memory-efficient, and suitable for large-scale similarity estimation and deduplication tasks. Benchmarks show that Rensa's implementations offer significant performance improvements over traditional MinHash libraries like `datasketch`.
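To contrast with the sketch above, here is a schematic rendition of the two-stage scheme described in point 1: only the two parameter pairs (`sigma_a`, `sigma_b`) and (`pi_c`, `pi_d`) are needed, with the per-slot offsets precomputed. Again, the constants, modulus, and batched/vectorized layout of the actual Rust implementation differ:
```python
# Schematic C-MinHash-style signature, following the formula quoted above.
MODULUS = (1 << 61) - 1

def cminhash_signature(item_hashes, num_perm, sigma_a, sigma_b, pi_c, pi_d):
    signature = [MODULUS] * num_perm
    # Precompute (pi_c * k + pi_d) for each slot k
    slot_offsets = [(pi_c * k + pi_d) % MODULUS for k in range(num_perm)]
    for h in item_hashes:
        sigma_h = (sigma_a * h + sigma_b) % MODULUS   # initial sigma transform
        base = (pi_c * sigma_h) % MODULUS
        for k in range(num_perm):
            value = (base + slot_offsets[k]) % MODULUS
            if value < signature[k]:
                signature[k] = value
    return signature
```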
## Installation
You can install Rensa using `pip`. It's available on all platforms:
```bash
pip install rensa
```
## Usage Example
### Deduplicating with Direct MinHash
Here's an example of how to use Rensa's MinHash implementations (e.g., `RMinHash`, `CMinHash`) for direct deduplication:
```python
from datasets import load_dataset
from rensa import RMinHash, CMinHash
from tqdm import tqdm
# Define a function to generate MinHash (works for RMinHash, CMinHash)
def generate_minhash_signature(text, minhash_class, num_perm=128, seed=42):
m = minhash_class(num_perm=num_perm, seed=seed)
m.update(text.split())
return m
def deduplicate_dataset_direct(dataset, text_column="sql", minhash_class=RMinHash, num_perm=128, desc="Deduplicating"):
unique_hashes = set()
deduplicated_indices = []
for idx, example in tqdm(enumerate(dataset), total=len(dataset), desc=desc):
minhash_obj = generate_minhash_signature(example[text_column], minhash_class, num_perm)
hash_tuple = tuple(minhash_obj.digest())
if hash_tuple not in unique_hashes:
unique_hashes.add(hash_tuple)
deduplicated_indices.append(idx)
return deduplicated_indices
def main_direct_deduplication():
print("Loading dataset...")
sql_dataset_dict = load_dataset("gretelai/synthetic_text_to_sql")
sql_dataset = sql_dataset_dict["train"]
print("Deduplicating dataset with R-MinHash...")
deduplicated_indices_r = deduplicate_dataset_direct(
sql_dataset,
text_column="sql",
minhash_class=RMinHash,
desc="R-MinHash Deduplication"
)
deduplicated_dataset_r = sql_dataset.select(deduplicated_indices_r)
print(f"Original dataset size: {len(sql_dataset)}")
print(f"Deduplicated dataset size (R-MinHash): {len(deduplicated_dataset_r)}")
print(f"Rows removed (R-MinHash): {len(sql_dataset) - len(deduplicated_dataset_r)}")
# Example with C-MinHash
# print("Deduplicating dataset with C-MinHash...")
# deduplicated_indices_c = deduplicate_dataset_direct(
# sql_dataset,
# text_column="sql",
# minhash_class=CMinHash,
# desc="C-MinHash Deduplication"
# )
# deduplicated_dataset_c = sql_dataset.select(deduplicated_indices_c)
# print(f"Deduplicated dataset size (C-MinHash): {len(deduplicated_dataset_c)}")
if __name__ == "__main__":
main_direct_deduplication()
```
### Using C-MinHash for Similarity
Here's a more direct example of using `CMinHash` for calculating Jaccard similarity:
```python
from rensa import CMinHash
# Example texts
text1 = "This is an example sentence for CMinHash."
text2 = "This is another example sentence, slightly different from the first."
# Initialize CMinHash objects
num_permutations = 256
seed = 12345
c_minhash1 = CMinHash(num_perm=num_permutations, seed=seed)
c_minhash2 = CMinHash(num_perm=num_permutations, seed=seed)
# Update with words from each text
c_minhash1.update(text1.split())
c_minhash2.update(text2.split())
# Calculate Jaccard similarity
similarity = c_minhash1.jaccard(c_minhash2)
print(f"Estimated Jaccard similarity (CMinHash, {num_permutations} perm): {similarity:.4f}")
# Get signatures
signature1 = c_minhash1.digest()
# print(f"C-MinHash signature 1: {signature1}")
```
### Deduplicating with RMinHashLSH
Here's an example of how to use `RMinHashLSH` for deduplicating a dataset. This approach is more efficient for larger datasets. Key LSH parameters are set to example values within the function.
```python
from datasets import load_dataset
from rensa import RMinHash, RMinHashLSH
from tqdm import tqdm
def deduplicate_dataset_with_lsh_simple(dataset, text_column="sql"):
num_perm = 128
seed = 42
lsh_threshold = 0.8
num_bands = 16
final_jaccard_threshold = 0.85
if num_perm % num_bands != 0:
raise ValueError(f"num_bands ({num_bands}) must divide num_perm ({num_perm}).")
minhashes = {}
for idx, example in tqdm(enumerate(dataset), total=len(dataset), desc="1. Generating RMinHashes"):
text_content = str(example[text_column])
tokens = text_content.split()
m = RMinHash(num_perm=num_perm, seed=seed)
m.update(tokens)
minhashes[idx] = m
lsh_index = RMinHashLSH(threshold=lsh_threshold, num_perm=num_perm, num_bands=num_bands)
for doc_id, rminhash_obj in tqdm(minhashes.items(), desc="2. Indexing into LSH"):
lsh_index.insert(doc_id, rminhash_obj)
to_remove = set()
sorted_doc_ids = sorted(minhashes.keys())
for doc_id in tqdm(sorted_doc_ids, desc="3. Querying LSH & Deduplicating"):
if doc_id in to_remove:
continue
query_minhash = minhashes[doc_id]
candidate_ids = lsh_index.query(query_minhash)
for candidate_id in candidate_ids:
if candidate_id == doc_id or candidate_id in to_remove:
continue
candidate_minhash = minhashes[candidate_id]
actual_jaccard = query_minhash.jaccard(candidate_minhash)
if actual_jaccard >= final_jaccard_threshold:
# Keep the item with the smaller original index
if doc_id < candidate_id:
to_remove.add(candidate_id)
else:
to_remove.add(doc_id)
break
deduplicated_indices = [idx for idx in sorted_doc_ids if idx not in to_remove]
return deduplicated_indices
def main_lsh_deduplication_simple():
print("Loading dataset...")
try:
sql_dataset_dict = load_dataset("gretelai/synthetic_text_to_sql")
sql_dataset = sql_dataset_dict["train"]
except Exception as e:
print(f"Failed to load dataset: {e}. Ensure 'datasets' is installed or use a local dataset.")
return
print("Deduplicating dataset with RMinHashLSH...")
deduplicated_indices_lsh = deduplicate_dataset_with_lsh_simple(
sql_dataset,
text_column="sql"
)
deduplicated_dataset_lsh = sql_dataset.select(deduplicated_indices_lsh)
print(f"Original dataset size (train split): {len(sql_dataset)}")
print(f"Deduplicated dataset size (RMinHashLSH): {len(deduplicated_dataset_lsh)}")
print(f"Rows removed (RMinHashLSH): {len(sql_dataset) - len(deduplicated_dataset_lsh)}")
if __name__ == "__main__":
main_lsh_deduplication_simple()
```
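As a side note, the parameters above follow the standard MinHash-LSH banding analysis; this is general LSH theory rather than anything specific to Rensa. With `num_perm = 128` split into `num_bands = 16` bands of 8 rows each, a pair with true Jaccard similarity `s` becomes an LSH candidate with probability `1 - (1 - s^8)^16`, and the steepest part of that curve sits near `(1/16)^(1/8) ≈ 0.71`, which is why pairing an `lsh_threshold` of 0.8 with a stricter `final_jaccard_threshold` of 0.85 is a sensible combination.

```python
# Standard LSH banding estimate (general theory, not a Rensa API)
num_perm, num_bands = 128, 16
rows_per_band = num_perm // num_bands  # 8 rows per band

def candidate_probability(s: float) -> float:
    """Probability that a pair with Jaccard similarity s shares at least one band."""
    return 1 - (1 - s ** rows_per_band) ** num_bands

approx_threshold = (1 / num_bands) ** (1 / rows_per_band)
print(f"curve midpoint ~ {approx_threshold:.2f}")                     # ~0.71
print(f"P(candidate | s=0.85) = {candidate_probability(0.85):.3f}")   # close to 1
print(f"P(candidate | s=0.50) = {candidate_probability(0.50):.3f}")   # small
```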
### Inline Deduplication for Streaming Data
Rensa now supports inline deduplication, perfect for scenarios where you receive continuous streams of data and need to check each new record against existing ones in real-time.
```python
from rensa import RMinHash, RMinHashDeduplicator
def inline_deduplication_example():
# Initialize the deduplicator with similarity threshold
# Adjust LSH parameters for the threshold
deduplicator = RMinHashDeduplicator(
threshold=0.7, # Jaccard similarity threshold
num_perm=128, # Number of permutations
use_lsh=True, # Use LSH for efficiency
num_bands=32, # More bands = more sensitive to lower similarities
)
# Simulate streaming data with varying similarities
document_stream = [
{"id": "001", "text": "The quick brown fox jumps over the lazy dog"},
{
"id": "002",
"text": "The quick brown fox jumps over the lazy dog today",
}, # Very similar
{
"id": "003",
"text": "A fast brown fox leaps over a sleepy dog",
}, # Somewhat similar
{"id": "004", "text": "Lorem ipsum dolor sit amet consectetur"},
{
"id": "005",
"text": "The quick brown fox jumps over the lazy dog",
}, # Exact duplicate
{
"id": "006",
"text": "Quick brown foxes jump over lazy dogs",
}, # Similar paraphrase
{"id": "007", "text": "Completely different content here"},
]
# Process each document as it arrives
for doc in document_stream:
# Create MinHash for the new document
minhash = RMinHash(num_perm=128, seed=42)
minhash.update(doc["text"].split())
# Check if it's a duplicate
if deduplicator.is_duplicate(doc["id"], minhash):
# Find which documents it duplicates
duplicates = deduplicator.get_duplicates(minhash)
print(f"Document {doc['id']} is a duplicate of: {duplicates}")
else:
# Add to the deduplicator if unique
if deduplicator.add(doc["id"], minhash):
print(f"Document {doc['id']} added (unique)")
print(f"\nTotal unique documents: {deduplicator.len()}")
if __name__ == "__main__":
inline_deduplication_example()
```
#### Inline Deduplication API
All deduplicators (`RMinHashDeduplicator`, `CMinHashDeduplicator`) support the following methods:
- `add(key: str, minhash) -> bool`: Add a new item if it's not a duplicate. Returns True if added.
- `is_duplicate(key: str, minhash) -> bool`: Check if an item is a duplicate without adding it.
- `get_duplicates(minhash) -> List[str]`: Get list of keys that are duplicates of the given MinHash.
- `remove(key: str) -> bool`: Remove an item from the deduplicator.
- `len() -> int`: Get the number of unique items stored.
- `clear()`: Remove all items from the deduplicator.
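For reference, here is a minimal sketch that exercises these methods with `RMinHashDeduplicator`, reusing the constructor parameters shown in the streaming example above; it assumes nothing beyond the methods listed here, and the keys and texts are placeholders.

```python
from rensa import RMinHash, RMinHashDeduplicator

def make_minhash(text, num_perm=128, seed=42):
    m = RMinHash(num_perm=num_perm, seed=seed)
    m.update(text.split())
    return m

dedup = RMinHashDeduplicator(threshold=0.7, num_perm=128, use_lsh=True, num_bands=32)

a = make_minhash("the quick brown fox jumps over the lazy dog")
b = make_minhash("the quick brown fox jumps over the lazy dog today")

print(dedup.add("doc-a", a))           # True: first unique item is stored
print(dedup.is_duplicate("doc-b", b))  # likely True at this threshold; nothing is added
print(dedup.get_duplicates(b))         # e.g. ["doc-a"]

print(dedup.remove("doc-a"))           # True: entry removed by key
print(dedup.len())                     # 0
dedup.clear()                          # drop anything that remains
```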
**Performance Tips for Inline Deduplication:**
1. **Use LSH for large datasets**: When dealing with thousands of documents, enable LSH (`use_lsh=True`) for `RMinHashDeduplicator`.
2. **Adjust threshold**: Lower thresholds catch more duplicates but may have false positives.
3. **Batch when possible**: If you receive data in small batches, process them together for better performance.
4. **Memory management**: For very large datasets, consider implementing a sliding window or periodic cleanup of old entries (a minimal sketch follows this list).
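One way to apply tip 4 is a fixed-size sliding window that evicts the oldest keys with `remove()`. The sketch below shows just one possible policy; the window size and the eviction choice are illustrative, not part of Rensa's API.

```python
from collections import deque
from rensa import RMinHash, RMinHashDeduplicator

WINDOW = 10_000  # illustrative cap on how many recent documents to remember

dedup = RMinHashDeduplicator(threshold=0.7, num_perm=128, use_lsh=True, num_bands=32)
recent_keys = deque()

def process(key: str, text: str) -> bool:
    """Return True if the document is kept, False if it is dropped as a duplicate."""
    m = RMinHash(num_perm=128, seed=42)
    m.update(text.split())
    if dedup.is_duplicate(key, m):
        return False
    if dedup.add(key, m):
        recent_keys.append(key)
        if len(recent_keys) > WINDOW:
            dedup.remove(recent_keys.popleft())  # evict the oldest entry
    return True
```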
## Algorithm Comparison: R-MinHash vs. C-MinHash vs. Datasketch
Rensa offers two MinHash implementations (`RMinHash` and `CMinHash`), each with different trade-offs relative to the other and to the popular `datasketch` library.
Based on the latest `advanced_benchmark.py` results (averaged over 5 runs on the `gretelai/synthetic_text_to_sql` dataset, 100,000 rows, on a MacBook Pro M2 with 32 GB of RAM):
* **Speed (at 256 permutations)**:
* **`CMinHash`** is consistently the fastest. Average execution time: **5.47 seconds**.
* **`RMinHash`** is also very fast. Average execution time: **5.58 seconds**.
* **`datasketch`** is considerably slower. Average execution time: **92.45 seconds**.
This makes `CMinHash` up to approximately **16.90x faster** than `datasketch` and `RMinHash` up to approximately **16.57x faster** (all at 256 permutations).
* **Accuracy (Jaccard Similarity of Deduplicated Sets vs. Datasketch, 128 permutations)**:
* **`RMinHash`** produces deduplication results identical to `datasketch` (Jaccard similarity of **1.0000** between their output sets of unique items, with 99,262 items in common).
* **`CMinHash`** also yields results very close to `datasketch`, with a Jaccard similarity of **0.9996** (99,223 items in common with `datasketch`).
This indicates that while both Rensa variants are highly effective for similarity estimation, `RMinHash` perfectly matches `datasketch`'s deduplication output in this benchmark and `CMinHash` produces extremely similar results.
* **Recommendation**:
* For most use cases, **`RMinHash`** provides an excellent balance of high speed (up to ~16.6x faster than `datasketch`) and accuracy (matching `datasketch`'s deduplication results). **It remains the generally recommended algorithm.**
* If absolute maximum throughput is the primary concern, **`CMinHash`** offers the best performance (up to ~16.9x faster than `datasketch`), with a negligible difference in exact deduplication results compared to `datasketch`/`RMinHash`.
* If you require features beyond core MinHash generation or need to integrate with an existing `datasketch` ecosystem, `datasketch` remains a comprehensive option, albeit slower for MinHash operations.
## Benchmark Results
Rensa offers significant performance advantages over traditional MinHash libraries like `datasketch`. Recent benchmarks demonstrate that Rensa's MinHash implementations are particularly powerful on large-scale, high-cardinality datasets.
### Large-Scale Benchmark (`Salesforce/wikitext`, 1.8 Million Rows)
The following benchmark was conducted using the large-scale `Salesforce/wikitext` dataset containing 1.8 million rows. This benchmark highlights Rensa's remarkable performance advantage:
* **R-MinHash** completed deduplication **\~39x faster** than `datasketch`.
* **C-MinHash** performance was similarly impressive.
**Performance comparison:**
| Algorithm | Execution Time (s) | Speedup vs Datasketch |
| --------------- | ------------------ | --------------------- |
| Datasketch | 1725 | - |
| Rensa R-MinHash | 44 | **\~39x faster** |
| Rensa C-MinHash | 42 | **\~41x faster** |
This benchmark clearly demonstrates Rensa's capability to handle large, diverse datasets at exceptional speed.
### Why Does Speedup Vary?
Benchmark speedups depend on dataset characteristics, including:
* **Cardinality (number of distinct elements)**
* **Document length and repetition rate**
High cardinality and large-scale datasets (such as `Salesforce/wikitext`) leverage Rensa's optimizations fully, achieving maximal performance gains.
### Previous Benchmarks (`gretelai/synthetic_text_to_sql`, 100K Rows)

Earlier benchmarks, using the smaller and more repetitive `gretelai/synthetic_text_to_sql` dataset, indicated approximately a **16x speedup** over `datasketch` (see the table below). While still impressive, these smaller speedups reflect differences in dataset characteristics:
| Algorithm | Execution Time (s) | Speedup vs Datasketch |
| -------------------- | ------------------ | --------------------- |
| Datasketch | 92.45 | - |
| Rensa R-MinHash | 5.58 | **\~16.6x faster** |
| Rensa C-MinHash | 5.47 | **\~16.9x faster** |
This demonstrates that Rensa significantly outperforms `datasketch` across these benchmarks, with even greater gains on larger, high-cardinality datasets.
### Recommendation
* **Use `RMinHash` or `CMinHash`** for general purposes, especially with large-scale datasets, to achieve maximum performance.
Rensa consistently delivers high-performance MinHash implementations that outperform `datasketch` substantially, making it ideal for real-world, large-scale deduplication and similarity estimation tasks.
## Running the Benchmarks
To run the benchmarks yourself, follow these steps:
1. **Clone the repository:**
```bash
git clone https://github.com/beowolx/rensa.git
cd rensa
```
2. **Create a virtual environment:**
```bash
python3 -m venv venv
source venv/bin/activate
```
3. **Install the required dependencies:**
```bash
pip install -r requirements.txt
```
4. **Run the simple benchmark** (compares core MinHash deduplication, uses the dataset `gretelai/synthetic_text_to_sql` with 100K rows):
```bash
python benchmarks/simple_benchmark.py
```
5. **Run the advanced benchmark** (detailed comparison of RMinHash, CMinHash and Datasketch, uses the dataset `gretelai/synthetic_text_to_sql` with 100K rows):
```bash
python benchmarks/advanced_benchmark.py
```
6. **Run the wiki benchmark** (compares deduplication performance on the `Salesforce/wikitext` dataset with 1.8M rows):
```bash
python benchmarks/wiki_benchmark.py
```
## Limitations and Future Work
While Rensa offers significant performance improvements, it has some limitations compared to `datasketch`:
- **Feature set**: Rensa currently implements core MinHash (`RMinHash`, `CMinHash`) and LSH (for `RMinHash` via `RMinHashLSH`) functionality. It doesn't include some of the advanced features found in `datasketch`, such as HyperLogLog.
- **Customization**: `datasketch` offers more options for customizing the hash functions and other parameters. Rensa's implementations are more fixed for performance but offer `seed` and `num_perm` customization.
- **Theoretical guarantees**:
- `RMinHash`, due to its simplified permutation generation, may not provide the same level of variance reduction as theoretically optimal MinHash or the full C-MinHash algorithm in all scenarios.
- `CMinHash` is designed to be a more faithful implementation of the C-MinHash paper's principles, aiming for stronger theoretical backing regarding its reduction of k permutations to two.
Future work on Rensa may include:
- Adding more advanced features and customization options
- Further optimizing performance for specific use cases and data types
- Potentially extending LSH support to other MinHash variants if beneficial.
Despite these limitations, Rensa's performance benefits make it an excellent choice for applications where speed and efficiency are critical, especially when working with large datasets.
## Contributing
Contributions to Rensa are welcome! Please feel free to submit pull requests, report bugs, or suggest features through the GitHub issue tracker.
## License
Rensa is released under the MIT License. See the LICENSE file for details.
|
https://github.com/ChefKissInc/QEMUAppleSilicon
|
QEMUAppleSilicon
Apple Silicon devices emulated on QEMU, currently only iPhone 11.
Languages:
.github
.github
.gitlab-ci.d
.gitlab-ci.d
.gitlab/issue_templates
.gitlab/issue_templates
accel
accel
audio
audio
...
.b4-config
.b4-config
.dir-locals.el
.dir-locals.el
.editorconfig
.editorconfig
.exrc
.exrc
.gdbinit
.gdbinit
> README.md
# QEMU Apple Silicon Fork
This is a fork of QEMU which provides Apple ARM device guest support.
> [!CAUTION]
> Please consider donating via [ko-fi](https://ko-fi.com/chefkiss) or BTC at `bc1qgu56kptepex2csuzl5nhzc4vxuj8c6ggjzhcem` to help continue the development.
Usage guide is located in the [wiki](https://github.com/ChefKissInc/QEMUAppleSilicon/wiki).
## Supported Devices
iPhone 11 running iOS 14.0 beta 5.
|
https://github.com/swiftlang/swift
|
swift
The Swift Programming Language
Languages: C++ (47.2%), Swift (44.7%), C (4.7%), Python (1.6%), CMake (0.9%), Objective-C (0.4%)
.github
.github
Runtimes
Runtimes
SwiftCompilerSources
SwiftCompilerSources
apinotes
apinotes
benchmark
benchmark
...
.clang-format
.clang-format
.clang-tidy
.clang-tidy
.dir-locals.el
.dir-locals.el
.editorconfig
.editorconfig
.flake8
.flake8
> README.md
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://www.swift.org/assets/images/swift~dark.svg">
<img src="https://www.swift.org/assets/images/swift.svg" alt="Swift logo" height="70">
</picture>
# Swift Programming Language
| **OS** | **Status** |
|---:|:---:|
| macOS | [](https://ci.swift.org/job/oss-swift-package-macos)|
| Ubuntu 20.04 | [](https://ci.swift.org/job/oss-swift-package-ubuntu-20_04) [](https://ci.swift.org/job/oss-swift-package-ubuntu-20_04-aarch64)|
| Ubuntu 22.04 | [](https://ci.swift.org/job/oss-swift-package-ubuntu-22_04) [](https://ci.swift.org/job/oss-swift-package-ubuntu-22_04-aarch64)|
| Ubuntu 24.04 | [](https://ci.swift.org/job/oss-swift-package-ubuntu-24_04) [](https://ci.swift.org/job/oss-swift-package-ubuntu-24_04-aarch64)|
| Amazon Linux 2 | [](https://ci.swift.org/job/oss-swift-package-amazon-linux-2) [](https://ci.swift.org/job/oss-swift-package-amazon-linux-2-aarch64)|
| Debian 12 | [](https://ci.swift.org/job/oss-swift-package-debian-12) [](https://ci.swift.org/job/oss-swift-package-debian-12-aarch64)|
| Windows 10 | [](https://ci-external.swift.org/job/swift-main-windows-toolchain) [](https://ci-external.swift.org/job/swift-main-windows-toolchain-arm64)|
| Universal Base Image 9 | [](https://ci.swift.org/job/oss-swift-package-ubi-9)|
|**Cross-Compilation Targets**||
| wasm32-unknown-wasi |[](https://ci.swift.org/job/oss-swift-pr-test-crosscompile-wasm-ubuntu-20_04)|
|**Community-Hosted CI Platforms**||
|[Android](https://github.com/swiftlang/swift-community-hosted-continuous-integration/blob/main/nodes/x86_64_ubuntu_24_04_android.json) | [](https://ci-external.swift.org/job/oss-swift-RA-linux-ubuntu-24.04-android-build) [](https://ci-external.swift.org/job/oss-swift-RA-linux-ubuntu-24.04-android-arm64)|
## Welcome to Swift
Swift is a high-performance system programming language. It has a clean
and modern syntax, offers seamless access to existing C and Objective-C code
and frameworks, and is memory-safe by default.
Although inspired by Objective-C and many other languages, Swift is not itself a
C-derived language. As a complete and independent language, Swift packages core
features like flow control, data structures, and functions, with high-level
constructs like objects, protocols, closures, and generics. Swift embraces
modules, eliminating the need for headers and the code duplication they entail.
To learn more about the programming language, visit [swift.org](https://swift.org/documentation/).
- [Contributing to Swift](#contributing-to-swift)
- [Getting Started](#getting-started)
- [Swift Toolchains](#swift-toolchains)
- [Build Failures](#build-failures)
- [Learning More](#learning-more)
## Contributing to Swift
Contributions to Swift are welcomed and encouraged! Please see the
[Contributing to Swift guide](https://swift.org/contributing/).
Before submitting the pull request, please make sure you have [tested your
changes](https://github.com/apple/swift/blob/main/docs/ContinuousIntegration.md)
and that they follow the Swift project [guidelines for contributing
code](https://swift.org/contributing/#contributing-code).
To be a truly great community, [Swift.org](https://swift.org/) needs to welcome
developers from all walks of life, with different backgrounds, and with a wide
range of experience. A diverse and friendly community will have more great
ideas, more unique perspectives, and produce more great code. We will work
diligently to make the Swift community welcoming to everyone.
To give clarity of what is expected of our members, Swift has adopted the
code of conduct defined by the Contributor Covenant. This document is used
across many open source communities, and we think it articulates our values
well. For more, see the [Code of Conduct](https://swift.org/code-of-conduct/).
## Getting Started
If you are interested in:
- Contributing fixes and features to the compiler: See our
[How to Submit Your First Pull Request guide](/docs/HowToGuides/FirstPullRequest.md).
- Building the compiler as a one-off: See our [Getting Started guide][].
- Building a toolchain as a one-off: Follow the [Getting Started guide][]
up until the "Building the project" section. After that, follow the
instructions in the [Swift Toolchains](#swift-toolchains) section below.
We also have an [FAQ](/docs/HowToGuides/FAQ.md) that answers common questions.
[Getting Started guide]: /docs/HowToGuides/GettingStarted.md
### Swift Toolchains
#### Building
Swift toolchains are created using the script
[build-toolchain](https://github.com/apple/swift/blob/main/utils/build-toolchain). This
script is used by swift.org's CI to produce snapshots and allows one to
locally reproduce such builds for development or distribution purposes. A typical
invocation looks like the following:
```sh
$ ./swift/utils/build-toolchain $BUNDLE_PREFIX
```
where ``$BUNDLE_PREFIX`` is a string that will be prepended to the build
date to give the bundle identifier of the toolchain's ``Info.plist``. For
instance, if ``$BUNDLE_PREFIX`` was ``com.example``, the toolchain
produced will have the bundle identifier ``com.example.YYYYMMDD``. It
will be created in the directory you run the script with a filename
of the form: ``swift-LOCAL-YYYY-MM-DD-a-osx.tar.gz``.
Beyond building the toolchain, ``build-toolchain`` also supports the
following (non-exhaustive) set of useful options:
- ``--dry-run``: Perform a dry run build. This is off by default.
- ``--test``: Test the toolchain after it has been compiled. This is off by default.
- ``--distcc``: Use distcc to speed up the build by distributing the C++ part of
the swift build. This is off by default.
- ``--sccache``: Use sccache to speed up subsequent builds of the compiler by
caching more C++ build artifacts. This is off by default.
More options may be added over time. Please pass ``--help`` to
``build-toolchain`` to see the full set of options.
#### Installing into Xcode
On macOS if one wants to install such a toolchain into Xcode:
1. Untar and copy the toolchain to one of `/Library/Developer/Toolchains/` or
`~/Library/Developer/Toolchains/`. E.g.:
```sh
$ sudo tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx.tar.gz -C /
$ tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx.tar.gz -C ~/
```
The script also generates an archive containing debug symbols which
can be installed over the main archive allowing symbolication of any
compiler crashes.
```sh
$ sudo tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx-symbols.tar.gz -C /
$ tar -xzf swift-LOCAL-YYYY-MM-DD-a-osx-symbols.tar.gz -C ~/
```
2. Specify the local toolchain for Xcode's use via `Xcode->Toolchains`.
### Build Failures
Try the suggestions in
[Troubleshooting build issues](/docs/HowToGuides/GettingStarted.md#troubleshooting-build-issues).
Make sure you are using the
[correct release](/docs/HowToGuides/GettingStarted.md#installing-dependencies)
of Xcode.
If you have changed Xcode versions but still encounter errors that appear to
be related to the Xcode version, try passing `--clean` to `build-script`.
When a new version of Xcode is released, you can update your build without
recompiling the entire project by passing `--reconfigure` to `build-script`.
## Learning More
Be sure to look at the [documentation index](/docs/README.md) for a bird's eye
view of the available documentation. In particular, the documents titled
[Debugging the Swift Compiler](docs/DebuggingTheCompiler.md) and
[Continuous Integration for Swift](docs/ContinuousIntegration.md) are very
helpful to understand before submitting your first PR.
|
https://github.com/x86matthew/DOSVisor
|
DOSVisor
x86 Real-Mode MS-DOS Emulator using Windows Hypervisor Platform
Languages: C++ (59.8%), C (40.2%)
bin
bin
src
src
...
README.md
README.md
dosvisor_flaresay.png
dosvisor_flaresay.png
> README.md
# DOSVisor
## Overview
This project is a DOS emulator that utilises the Windows Hypervisor Platform (WHP) API to create a virtual CPU, enabling the execution of DOS programs within a virtualised 16-bit real-mode environment on modern Windows systems.
In my previous Windows 3.1 emulator, the 80286 CPU was fully emulated in software. However, for this project, I decided to take a different approach by using hardware virtualisation for the CPU while emulating only the DOS layer in software.
This emulator primarily serves as a demonstration of the WHP API and is not intended to function as a complete DOS emulator. Minimal DOS functionality has been implemented, sufficient only to run a few sample programs, although the framework has been designed to be easily extendable.
## Details
### CPU Initialisation
The emulator initialises a virtual CPU using the WHP API, configuring it to run in real-mode by ensuring that the PE (Protected Mode Enable) and PG (Paging) bits are disabled in the CR0 register.
### Memory Layout
The DOS executable is mapped into the emulator host and shared with the guest at the appropriate physical address, simulating the memory layout of a real DOS system.
### Interrupt Handling
Most DOS functionality occurs via interrupts, such as 0x21 for system services and 0x10 for video operations, which must be emulated manually in software. Interrupts do not inherently trigger VM exits, so additional tricks are required to capture them. This is easily addressed by implementing a custom Interrupt Vector Table (IVT) in which each interrupt vector points to a CPUID instruction. This configuration ensures that a VM exit is triggered whenever an interrupt occurs, enabling the emulator to intercept and handle it.
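The trick described above can be sketched in a few lines. The snippet below is not DOSVisor source code, just an illustration (written in Python for brevity): every IVT vector points into a stub area filled with `CPUID`+`IRET`, and the stub segment value is an arbitrary assumption.

```python
# Illustrative layout only: a real-mode IVT whose 256 vectors each point at a
# CPUID stub, so any software interrupt causes a CPUID VM exit whose guest IP
# reveals the vector number.
STUB_SEG = 0xF000              # assumption: an otherwise unused segment for the stubs
STUB = b"\x0f\xa2\xcf"         # CPUID ; IRET
ENTRY_LEN = len(STUB)

def build_ivt_and_stubs():
    ivt, stubs = bytearray(), bytearray()
    for vector in range(256):
        offset = vector * ENTRY_LEN
        # Each IVT entry is a 16-bit offset followed by a 16-bit segment (little-endian)
        ivt += offset.to_bytes(2, "little") + STUB_SEG.to_bytes(2, "little")
        stubs += STUB
    # ivt is copied to guest physical 0000:0000, stubs to STUB_SEG:0000
    return bytes(ivt), bytes(stubs)

def vector_from_exit_ip(ip: int) -> int:
    # The guest IP reported at the CPUID exit falls inside the stub area;
    # its position there identifies which interrupt vector was invoked.
    return ip // ENTRY_LEN
```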
### I/O Port Support
DOS programs often make use of I/O ports, such as for the PC speaker, CMOS, and hardware timers. I/O requests are easily captured because the `in` and `out` instructions trigger a VM exit, allowing the functionality to be emulated. Currently, only a very small set of common I/O requests are supported.
### Timer Emulation
DOS programs used various methods for maintaining timing. This emulator includes support for a select few of these mechanisms:
- **System BIOS Counter at 0000:046C:** A 32-bit counter that increments at a rate of 18.2065 Hz.
- **RAM Refresh Toggle Bit at IO Port 0x61:** A single bit that toggles every 15 microseconds.
- **System Clock via INT 0x1A:** Returns the BIOS counter value from `0000:046C`, an alternative to accessing the memory directly.
- **RTC Registers via IO Port 0x70/0x71:** Returns the current time from the CMOS controller.
- **Current Time via INT 0x21:** Returns the current system time from the main DOS function dispatcher.
The 8253 PIT timer is not fully implemented at this time.
### Keyboard State Monitoring
The emulator monitors the keyboard state and maintains a simple key buffer, passing information to the DOS program via keyboard interrupts (0x16) as necessary.
### Terminal Command Support
Basic terminal commands are supported, allowing the emulator to run simple text-based games and other basic programs.
### System Fields
The emulator currently allocates space for system fields, such as BIOS data areas and the Program Segment Prefix (PSP). However, these fields are not currently populated, except for the BIOS counter value. Additional fields can be populated as necessary.
## Testing
Testing has been performed using a simple DOS game from FlareOn 2023 (FlareSay.exe, challenge #6), primarily because it was the only DOS executable on my hard disk at the time.

## Future
In addition to this DOS emulator, I have had some success with a similar concept aimed at emulating 64-bit/32-bit Windows user-mode programs. By building up a simulated CPL3 environment and hooking the MSR_LSTAR register, it is possible to emulate an executable and force a VM exit on system calls. This allows the emulator to capture and either forward the system call request to the host OS or intercept it. This technique does come with many complications, but there may be valid use cases where a lightweight emulator could be useful.
|
https://github.com/conflow-dev/ConFlow
|
ConFlow
Languages: C++ (84.7%), Python (11.4%), C (2.1%), Makefile (1.5%)
en_ops
en_ops
images
images
model_demo/DeepCrossing
model_demo/DeepCrossing
sgx_tf_ops
sgx_tf_ops
...
README.md
README.md
operators.py
operators.py
recompiler.py
recompiler.py
run.sh
run.sh
> README.md
<div align="center">
**ConFlow**
<strong>Chinese |
[English](./README_en.md)</strong>
</div>
**ConFlow** is a neural network computing framework that supports offloading TEE secure computation to NNAs (Neural Network Accelerators):
- **Secure computation offloading**: Linear operators can be migrated automatically, in ciphertext form, to various neural network accelerators such as GPUs, NPUs, and TPUs, exploiting the accelerators' compute power while guaranteeing that data is processed in ciphertext.
- **Front-end model structure compilation**: The automatically migrated ciphertext operators guarantee security through linear splitting; a compiler for the front-end model description language converts them into the encrypted data structures supported by the framework.
## Changelog <!-- omit in toc -->
#### 📌 Pinned
* [2024.02] We updated the related documentation and notes.
* [2023.12] We open-sourced the model description compiler and the TensorFlow-compatible framework implementation, which can compile conventional model descriptions into model structures that support this ciphertext computation.
* [2023.07] We proposed a privacy-preserving machine learning framework that supports computing on encrypted data while adding only a small computation overhead.
## Table of Contents <!-- omit in toc -->
- [ConFlow](#conflow)
- [Environment](#environment)
- [Code Structure](#code-structure)
- [Usage Guide](#usage-guide)
- [Installation](#installation)
- [Simple Usage and Testing](#simple-usage-and-testing)
- [Developing Custom Operators](#developing-custom-operators)
- [License](#license)
- [Statement](#statement)
- [Citation](#citation)
## **ConFlow**
**ConFlow** is a neural network computing framework that supports offloading TEE secure computation to NNAs (Neural Network Accelerators). It automatically migrates linear operators, in ciphertext form, onto various neural network accelerators such as GPUs, NPUs, and TPUs, using the accelerators' compute power to improve performance while keeping the data encrypted during computation. To reduce the effort developers need to adapt to this kind of ciphertext network structure, the framework ships with a compiler for the model description language that automatically compiles TensorFlow model descriptions into the framework's encrypted model structures.
- 🏆 **Privacy protection**: Linear operators are computed on the NNA in ciphertext form, so the compute provider cannot access the plaintext; non-linear operators run inside the TEE, where the compute provider likewise cannot access the plaintext. The whole pipeline protects private data from being leaked (a conceptual sketch of the underlying linear-splitting idea follows this list).
- 🔥 **Performance**: Compared with an ordinary TEE-only framework, this project can take advantage of NNA acceleration, so actual throughput is significantly higher than CPU-TEE-only execution.
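The following is a conceptual NumPy sketch of the linearity property that makes such offloading possible: a linear layer applied to additive shares recombines exactly inside the TEE. It deliberately ignores how the shares themselves are protected (ConFlow's actual construction, with its expansion factor and keys configured in run.sh, is more involved); it only illustrates the algebra, and is not ConFlow's protocol or API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plaintext input held inside the TEE, and a linear layer's weights
x = rng.normal(size=(1, 8)).astype(np.float32)
W = rng.normal(size=(8, 4)).astype(np.float32)

# Inside the TEE: split x into additive shares (real schemes protect the shares further)
r = rng.normal(size=x.shape).astype(np.float32)
share1, share2 = x - r, r

# On the accelerator: apply the linear operator to each share separately
y1 = share1 @ W
y2 = share2 @ W

# Back inside the TEE: linearity lets the shares recombine exactly
y = y1 + y2
assert np.allclose(y, x @ W, atol=1e-5)
```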
### Performance Evaluation <!-- omit in toc -->
Detailed evaluation results across multiple models and frameworks.
<div align="center">
<img src="./images/eval.jpg" width="95%" />
</div>
### Typical Example <!-- omit in toc -->
We applied ConFlow to the ImageNet dataset and stored the model's intermediate feature maps.
<div align="center">
<img src="./images/eval2.jpg" width="95%" />
</div>
## **Environment**
TensorFlow (version > 2.0) and the Intel SGX software and hardware stack must be installed in advance.
## **Code Structure**
- **en_ops**:
  - Makefile
  - *.cc: C++ implementations of the custom TensorFlow operators.
- **sgx_tf_ops**:
  - App: the interface between the encrypted operators and the outside world
  - Enclave: the actual implementation of the encrypted operators
  - Makefile
- **model_demo**:
  - DeepCrossing: the DeepCrossing model, with encrypted inputs and the non-linear operators replaced by encrypted operators.
- **operators.py**: the Python wrapper layer for the encrypted operators; models can use them directly via `import operators`.
- **run.sh**: configuration and build script
## Usage Guide
### **Installation**
1. Clone our repository and change into the directory
```bash
git clone git@github.com:Nightliver/privacy_tf.git
cd privacy_tf
```
2. Configure the keys and build the operators
```bash
sh run.sh # run the script
Enter the expansion factor:  # a number: the factor by which the data is expanded after encryption
Enter the number of draws:   # a number: how many random draws are made during encryption, e.g. 5 means 5 draws from the randomly generated data
Enter the key:               # a list of numbers separated by ',', with as many numbers as there are draws
# Wait for the build to finish. ops.so will appear under en_ops and sgx.so under sgx_tf_ops.
```
### **Simple Usage and Testing**
Test script
```python
import tensorflow.compat.v1 as tf  # import the TensorFlow package
import operators as sgx            # import the encrypted-operator package
sgx.create_enclave()               # initialize the trusted execution environment
a = tf.constant([[2,3,4],[-1,-2,0]], tf.float32)  # create a tensor
b = sgx.en_crypt(a)                # encrypt the tensor
c = sgx.e_relu(b)                  # run an encrypted ReLU on the encrypted tensor
d = sgx.de_crypt(c)                # decrypt the result of the encrypted computation
```
### **Running the Model**
```bash
cd ./model_demo/Deepcrossing  # enter the model directory
python train.py               # run the encrypted model
```
## Developing Custom Operators
### **Custom SGX Operators**
An example of defining a custom SGX operator with the TEE framework
1. Implement the computation logic in ./sgx_tf_ops/Enclave/Enclave.cpp:
```c++
void ecall_test(float *input,int N,float *output){
}
void ecall_test_grad(float *input,float *grad,int N,float *output){
}
```
2. Register it in the `trusted` section of ./sgx_tf_ops/Enclave/Enclave.edl:
```
public void ecall_test([user_check]float* input,int N,[user_check]float* output);
public void ecall_test_grad([user_check]float* input,[user_check]float* grad,int N,[user_check]float* output);
```
3. Declare the interface in the header file ./sgx_tf_ops/App/App.h:
```c++
void test(unsigned long int eid,float *input, int N, float *output);
void test_grad(unsigned long int eid,float *input,float* grad, int N, float *output);
```
4. Write the interface functions in ./sgx_tf_ops/App/App.cpp:
```c++
void test(unsigned long int eid,float *input, int N, float *output){
sgx_status_t ret = ecall_test(eid, input,N,output);
if (ret != SGX_SUCCESS) {
print_error_message(ret);
throw ret;
}
}
void test_grad(unsigned long int eid,float *input,float *grad, int N, float *output){
sgx_status_t ret = ecall_test_grad(eid, input,grad,N,output);
if (ret != SGX_SUCCESS) {
print_error_message(ret);
throw ret;
}
}
```
5. Finally, rebuild with `make`.
### **Custom TF Operators**
An example of building a custom TensorFlow operator
1. Create a new file test.cc under the ./en_ops folder
2. Include the headers and set the namespaces:
```c++
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/framework/shape_inference.h"
#include <cstdlib>
#include <iostream>
#include <cmath>
#include <dlfcn.h>
using namespace tensorflow;
using namespace shape_inference;
using namespace std;
```
3. Register the ops:
```c++
REGISTER_OP("ETest")
.Input("input: float")
.Attr("eid_low: int")
.Attr("eid_high: int")
.Attr("times:int")
.Output("output: float")
.SetShapeFn([](::tensorflow::shape_inference::InferenceContext* c) {
c->set_output(0, c->input(0)); // the output has the same shape as the input
return Status::OK();
});
REGISTER_OP("ETestGrad")
.Input("grad: float")
.Input("input: float")
.Attr("eid_low: int")
.Attr("eid_high: int")
.Attr("times:int")
.Output("output: float");
```
4. Register the kernels:
```c++
class ETestOp : public OpKernel {
public:
explicit ETestOp(OpKernelConstruction* context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("eid_low", &eid_low_));
OP_REQUIRES_OK(context, context->GetAttr("eid_high", &eid_high_));
OP_REQUIRES_OK(context, context->GetAttr("times", ×_));
lib = dlopen("(所在路径)/sgx.so",RTLD_LAZY);
OP_REQUIRES(context, lib != NULL, errors::Unknown("Unable to load sgx.so!"));
}
void Compute(OpKernelContext* context) override {
const Tensor& input = context->input(0); // get the input
auto input_flat = input.flat<float>();
const TensorShape& input_shape = input.shape(); // get the input shape
Tensor* output = NULL;
OP_REQUIRES_OK(context, context->allocate_output(0, input_shape, &output)); // allocate the output
auto output_flat = output->flat<float>();
int N = input_flat.size()/times_;
unsigned long int eid_ = (eid_high_ << 32) + eid_low_;
typedef void (*function)(unsigned long int eid,float* input, int N, float* output);
dlerror();
function test_kernel = (function) dlsym(lib, "test"); // look up the SGX operator
const char *dlsym_error = dlerror();
OP_REQUIRES(context, !dlsym_error, errors::Unknown("loading of test failed: ", dlsym_error)); //失败处理
test_kernel(eid_,(float*)input_flat.data(),N,(float*)output_flat.data());//调用SGX进行计算
};
private:
void* lib;
int64 eid_low_;
int64 eid_high_;
int64 times_;
};
REGISTER_KERNEL_BUILDER(Name("ETest").Device(DEVICE_CPU), ETestOp);
class ETestGradOp : public OpKernel {
public:
explicit ETestGradOp(OpKernelConstruction* context) : OpKernel(context) {
OP_REQUIRES_OK(context, context->GetAttr("eid_low", &eid_low_));
OP_REQUIRES_OK(context, context->GetAttr("eid_high", &eid_high_));
OP_REQUIRES_OK(context, context->GetAttr("times", ×_));
lib = dlopen("(所在路径)/sgx.so",RTLD_LAZY);
OP_REQUIRES(context, lib != NULL, errors::Unknown("Unable to load sgx.so!"));
}
void Compute(OpKernelContext* context) override {
const Tensor& grad = context->input(0);
const Tensor& input = context->input(1);
auto grad_flat = grad.flat<float>();
auto input_flat = input.flat<float>();
// check shapes of input
const TensorShape& input_shape = input.shape();
// create output tensor
Tensor* output = NULL;
OP_REQUIRES_OK(context, context->allocate_output(0, input_shape, &output));
auto output_flat = output->flat<float>();
const int N = input_flat.size()/times_;
unsigned long int eid_ = (eid_high_ << 32) + eid_low_;
typedef void (*function)(unsigned long int eid,float* input,float *grad, int N, float* output);
dlerror();
function test_grad_kernel = (function) dlsym(lib, "test_grad");
const char *dlsym_error = dlerror();
OP_REQUIRES(context, !dlsym_error, errors::Unknown("loading of relu_grad failed: ", dlsym_error));
test_grad_kernel(eid_,(float*)input_flat.data(),(float*)grad_flat.data(),N,(float*)output_flat.data());
};
private:
void* lib;
int64 eid_low_;
int64 eid_high_;
int64 times_;
};
REGISTER_KERNEL_BUILDER(Name("ETestGrad").Device(DEVICE_CPU), ETestGradOp);
```
5. Rebuild with `make`
6. Python-level wrapping and gradient registration
7. In operators.py, wrap the function and register its gradient:
```python
def e_test(inputs):
global eid,times
return trainer_ops.e_test(inputs,eid_low=(eid&0xFFFFFFFF),eid_high=(eid>>32),times=times)
@ops.RegisterGradient("ETest")
def _e_test_grad(op, grad):
with tf.name_scope("ETestGrad"), tf.xla.experimental.jit_scope(compile_ops=False):
return trainer_ops.e_test_grad(grad,op.inputs[0],eid_low=op.get_attr("eid_low"), eid_high=op.get_attr("eid_high"),times = op.get_attr("times"))
```
### **Usage**
An example of using the custom encrypted operator
```python
import tensorflow as tf
import operators
operators.create_enclave()  # initialize the enclave
operators.e_test()          # use the custom operator
```
## License <!-- omit in toc -->
* The code in this repository is open-sourced under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) license and is subject to the related statements of this project.
## Statement <!-- omit in toc -->
* If academic, commercial, or other benefits are obtained by directly or indirectly using the original methods of this project, the original authors of the project have the right to be informed.
## Citation
If you find our model/code/paper helpful, please give us a ⭐ and a citation 📝. Thanks!
```bib
@INPROCEEDINGS{10247964,
author={Li, Qiushi and Ren, Ju and Zhang, Yan and Song, Chengru and Liao, Yiqiao and Zhang, Yaoxue},
booktitle={2023 60th ACM/IEEE Design Automation Conference (DAC)},
title={Privacy-Preserving DNN Training with Prefetched Meta-Keys on Heterogeneous Neural Network Accelerators},
year={2023},
volume={},
number={},
pages={1-6},
keywords={Training;Privacy;Data privacy;Design automation;Prefetching;Artificial neural networks;Resists;deep learning;privacy preserving;neural network accelerator;cloud computing;trusted execution environment},
doi={10.1109/DAC56929.2023.10247964}}
```
|