
Local Models with ollama #3

Open
haesleinhuepf opened this issue Nov 11, 2024 · 2 comments

@haesleinhuepf (Member) commented Nov 11, 2024

Hi @lea-33,

how about introducing another LLM endpoint: ollama? New vision models were published recently, namely llama3.2-vision in 11B and 90B variants. Does the 11B version work in your environment? If so, we could overcome rate limits by using local models.
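
For context, a minimal sketch of what calling such an endpoint could look like, using the official `ollama` Python client. This assumes ollama is installed and running locally and the model was pulled beforehand with `ollama pull llama3.2-vision`; the prompt and image path are placeholders:

```python
import ollama

# Ask llama3.2-vision (the 11B variant by default) to describe a local image.
# Assumes the ollama server is running and the model has been pulled via
# `ollama pull llama3.2-vision`. The image path below is a placeholder.
response = ollama.chat(
    model="llama3.2-vision",
    messages=[
        {
            "role": "user",
            "content": "Describe this microscopy image.",
            "images": ["example_image.png"],
        }
    ],
)
print(response["message"]["content"])
```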

@lea-33 lea-33 self-assigned this Nov 14, 2024
@lea-33 lea-33 added the enhancement New feature or request label Nov 15, 2024
@lea-33 (Collaborator) commented Nov 19, 2024

Hi @haesleinhuepf,
sadly, it is not working on my laptop. It seems I don't have enough memory to run it [Error: model requires more system memory (11.4 GiB) than is available (9.9 GiB)].

@haesleinhuepf (Member, Author) commented

OK, then it might be worth exploring how to run it on clara/paula in the compute center?
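
If that route is explored, one option (a sketch, not tested on clara/paula) would be to run `ollama serve` on a compute node and point the Python client at it from outside. The hostname below is hypothetical; 11434 is ollama's default port:

```python
from ollama import Client

# Connect to an ollama server running on a remote compute node.
# The hostname is a placeholder; 11434 is ollama's default port.
client = Client(host="http://clara-node01:11434")

response = client.chat(
    model="llama3.2-vision",
    messages=[{"role": "user", "content": "Hello from the cluster!"}],
)
print(response["message"]["content"])
```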
