Support LoRA Adapters #6
iwr-redmond started this conversation in Ideas
It would be helpful to be able to match particular groups of agents with LoRAs. This would allow the same base model to be used in different ways, while also helping to reduce the sort of repetitiveness observed by Chung et al. (2025, Table 1), which appears even in very large models.
Taking the example of Llama 3.1 from #4, here are some of the other LoRAs that could be used with the same base model:
Taking such an approach would allow a resource-constrained computer to generate rich and diverse data (Llama 3.1 8B Q4_K_M uses around 5 GB of VRAM), including content that would, quite necessarily, be disallowed when using moderated APIs.
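
As a rough illustration of the kind of wiring this could involve, here is a minimal sketch that maps agent groups to LoRA adapters over a single shared base model. It assumes llama-cpp-python is the loader and that its `Llama` constructor accepts a `lora_path` argument in the installed version; the group names and adapter paths are placeholders, not real files.

```python
# Sketch only: one shared base model, per-agent-group LoRA adapters.
# Group names and file paths below are hypothetical placeholders.
from llama_cpp import Llama

BASE_MODEL = "models/llama-3.1-8b-q4_k_m.gguf"  # hypothetical path to the shared base

# Hypothetical agent-group -> adapter mapping; None means "plain base model".
AGENT_LORAS = {
    "narrators": "adapters/storytelling.gguf",
    "critics": "adapters/literary-critique.gguf",
    "default": None,
}

def load_for_group(group: str) -> Llama:
    """Load the shared base model, applying the group's LoRA if one is configured."""
    lora = AGENT_LORAS.get(group, AGENT_LORAS["default"])
    kwargs = {"model_path": BASE_MODEL, "n_ctx": 4096}
    if lora is not None:
        # Assumes the installed llama-cpp-python exposes `lora_path` on the constructor.
        kwargs["lora_path"] = lora
    return Llama(**kwargs)

# Usage: each agent group gets its own flavour of the same ~5 GB base model.
# narrator_llm = load_for_group("narrators")
```

The point is simply that the per-group adapter is a small file selected at load time, so the VRAM cost stays close to that of the single base model rather than multiplying per agent group.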