abujazar | 7 months ago | on: Mistral ships Le Chat – enterprise AI assistant th...
Actually, you shouldn't run LLMs in Docker on a Mac, because Docker there has no GPU support: containers run in a Linux VM with no access to the host's Metal GPU. Larger models will be extremely slow, if they manage to produce a single token at all.
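The constraint can be sketched as a small preflight helper. This is an illustrative function (the name and the `host_os`/`containerized` parameters are made up for this sketch, not part of any real tool) encoding the rule that containerized inference on macOS is CPU-only:

```python
def inference_backend(host_os: str, containerized: bool) -> str:
    """Rough heuristic: which acceleration a local LLM runtime can use.

    Docker Desktop on macOS runs containers inside a Linux VM that has
    no passthrough to the host's Metal GPU, so inference in a container
    on a Mac falls back to CPU only.
    """
    if host_os == "macos":
        # Native processes can use Metal; containers cannot.
        return "cpu" if containerized else "metal"
    # On Linux, GPU passthrough into containers is possible (e.g. with
    # the NVIDIA container toolkit), so a GPU backend is at least plausible.
    return "gpu-capable"

print(inference_backend("macos", containerized=True))   # cpu
print(inference_backend("macos", containerized=False))  # metal
```

In other words: on a Mac, run the model natively so the runtime can use Metal, rather than inside a container.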