I haven’t gotten an answer yet, and I haven’t given up. I think the first comment is no longer valid: macOS machines with CUDA-capable GPUs do exist today, and it is possible for Python to target CUDA through the Numba project. How to integrate this into the fastai infrastructure any time soon is another story, but the idea I’m trying to get across is that this bridge is feasible. I’m just not sure how to approach it; maybe someone can give more up-to-date advice? From the Numba main page: “Numba supports compilation of Python to run on either CPU or GPU hardware, and is designed to integrate with the Python scientific software stack. […] Numba works by generating optimized machine code using the LLVM compiler infrastructure at import time, runtime, or statically (using the included pycc tool).”