Replies: 3 comments 6 replies
-
I am facing a similar problem when building llama.cpp for CPU on the Jetson Orin Nano. llama.cpp cannot be built for the Cortex-A78AE CPU.
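For reference, a CPU-only configuration along these lines would be the usual starting point (a sketch, not a verified recipe: `-mcpu=cortex-a78ae` assumes a GCC new enough to recognize that core; `-mcpu=native` can be substituted otherwise):

```shell
# CPU-only build: leave CUDA off and target the A78AE cores directly.
cmake -B build -DGGML_CUDA=OFF \
      -DCMAKE_C_FLAGS="-mcpu=cortex-a78ae" \
      -DCMAKE_CXX_FLAGS="-mcpu=cortex-a78ae"
cmake --build build --config Release
```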
-
I don't have this platform; however, it would be much more helpful if you guys would post the exact commands you used to try to compile and run llama.cpp on your respective platform(s). 🤔 Did you follow the instructions from https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md#cuda
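On Orin specifically, it may also help to pin the CUDA architecture so nvcc doesn't guess wrong (a sketch: `CMAKE_CUDA_ARCHITECTURES` is standard CMake; the value `87` is my assumption based on Orin being compute capability 8.7):

```shell
# CUDA build pinned to the Orin GPU (compute capability 8.7).
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES=87
cmake --build build --config Release
```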
-
Thank you. I have solved it by updating CMake.
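In case it helps others, a sketch of checking and updating CMake (the `pip` route is one common way to get a newer CMake on aarch64, since JetPack's apt package can lag behind; no specific version is required here):

```shell
# Check which CMake the build would pick up.
cmake --version

# The cmake package on PyPI ships prebuilt aarch64 wheels of recent versions.
pip install --upgrade cmake
```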
…---- Replied Message ----
Subject: Re: [ggerganov/llama.cpp] Llamacpp compile failed on Jetson Orin Nano (8GB) (Discussion #10545)

Did you follow the instructions from https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md#cuda

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
-
My Orin Nano (8 GB) is flashed with JetPack 6.0 (CUDA 12.2, GCC 11.4). When I compile the source code of llama.cpp (with CUDA) on the Orin Nano, the following error occurs. Has anyone successfully compiled on the Nano? Does anyone know how to solve this error?