Active open-source contributor, interested in LLMs, programming languages, and their related infrastructure and systems.
Pinned
- TUDB-Labs/MixLoRA: State-of-the-art Parameter-Efficient MoE Fine-tuning Method
- TUDB-Labs/mLoRA: An Efficient "Factory" to Build Multiple LoRA Adapters
- scu-covariant/MoE-PEFT (forked from TUDB-Labs/MoE-PEFT): An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT (Python)