Is there a way to use Python GPU for ECC speed up?

2 Answers

_Counselor

I don't know of any ready-to-use 256-bit number numpy libraries, but it is possible to create one, using 64- or 32-bit numbers for the math operations. You cannot speed up a single operation like one point multiplication just by using a GPU, because a single CUDA core is much slower than a CPU core. To get a performance gain, you need to divide the full workload into many independent tasks that run in parallel.
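As a rough illustration of the limb idea, here is a minimal numpy sketch: each 256-bit number is stored as eight 32-bit limbs, and addition is vectorized over a whole batch of numbers at once, which is exactly the kind of data-parallel workload that maps to a GPU. The function name and layout are made up for the example, not from any existing library.

```python
import numpy as np

def add_256(a, b):
    """Add two batches of 256-bit numbers (mod 2**256), element-wise.

    Each row of the (n, 8) uint32 arrays holds one 256-bit number as
    eight 32-bit limbs, least-significant limb first.
    """
    out = np.empty_like(a)
    carry = np.zeros(a.shape[0], dtype=np.uint64)
    for i in range(8):  # propagate carries limb by limb
        s = a[:, i].astype(np.uint64) + b[:, i].astype(np.uint64) + carry
        out[:, i] = (s & np.uint64(0xFFFFFFFF)).astype(np.uint32)
        carry = s >> np.uint64(32)
    return out

# Example: (2**256 - 1) + 1 wraps around to 0 mod 2**256.
a = np.array([[0xFFFFFFFF] * 8], dtype=np.uint32)
b = np.array([[1] + [0] * 7], dtype=np.uint32)
assert (add_256(a, b) == 0).all()
```

In principle the same limb loop can then be pushed to the GPU via CuPy, which mirrors most of the numpy array API, so each batch element becomes one of the independent parallel tasks described above.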

Comments (3)

SamYezi 2022-10-17 18:50

So what you are suggesting is that it is easier to use C++ with Boost together with a multithreaded approach, with everything running on the CPU? That may be a better way to think about it, since I also want to port this to laptops and other devices that have neither a GPU nor the NVIDIA toolkit installed.

NotATether 2022-10-21 00:58

Why the need for Boost? I wouldn't use any of its libraries inside performance-intensive loops, but for things like Program Options that run only once, it's fine. Boost is known to trade speed for a cleaner interface.

NotATether 2022-10-17 08:23

I guess you could try using an algorithm that computes multiple point multiplications at once - incrementally, not using threads or CUDA cores. This will save you time as long as you only batch-multiply as many points as it takes to do (according to the paper) 5 serial ECmults.
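For reference, the usual reason such batching pays off is Montgomery's batch-inversion trick: many point additions share a single modular inversion, which is by far the most expensive field operation. Below is a sketch of the trick over the secp256k1 field; it is the standard technique stated from memory, not code taken from the paper mentioned above.

```python
# Montgomery's batch-inversion trick: invert n nonzero field elements
# with a single modular inversion plus 3*(n-1) multiplications.

P = 2**256 - 2**32 - 977  # secp256k1 field prime

def batch_inverse(xs, p=P):
    """Return [x**-1 mod p for x in xs] using one modular inversion.

    All inputs must be nonzero mod p.
    """
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):       # prefix[i+1] = x0 * x1 * ... * xi
        prefix[i + 1] = prefix[i] * x % p
    inv = pow(prefix[n], -1, p)      # the single expensive inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):   # peel off one factor at a time
        out[i] = prefix[i] * inv % p
        inv = inv * xs[i] % p
    return out

# Example: all three inverses come from one pow(..., -1, P) call.
assert batch_inverse([2, 3, 5]) == [pow(x, -1, P) for x in [2, 3, 5]]
```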

NotATether

You have to make your own fixed-width integer class that represents numbers in base-2 notation if you want to implement some kind of support for 256-bit data. I have a C++ (not Python) fixed-width class, but it's in base-10, sorry.
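A minimal sketch of what such a class could look like in Python follows; the class name, wraparound semantics, and method set are my own choices for the example, not part of any existing library.

```python
# A fixed-width 256-bit unsigned integer wrapper with wraparound
# (mod 2**256) arithmetic, per the base-2 suggestion above.

MASK256 = (1 << 256) - 1

class U256:
    """Unsigned 256-bit integer; all arithmetic wraps mod 2**256."""
    __slots__ = ("value",)

    def __init__(self, value):
        self.value = value & MASK256  # truncate to 256 bits

    def __add__(self, other):
        return U256(self.value + other.value)

    def __mul__(self, other):
        return U256(self.value * other.value)

    def __repr__(self):
        return f"U256({self.value:#066x})"  # 0x + 64 hex digits

# Example: (2**256 - 1) + 1 wraps around to 0.
assert (U256(MASK256) + U256(1)).value == 0
```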
