Stick Page Forums Archive

Programming question

Started by: MaxZ | Replies: 1 | Views: 323

MaxZ

Posts: 2,919
Joined: Dec 2008
Rep: 10

Jun 28, 2013 4:02 PM #1021887
I wrote a short Python script that generates prime numbers using the so-called 'Sieve of Eratosthenes'. I won't bother explaining the concept behind it, as it's quite simple. I have a dilemma, though: the script I wrote is pretty well-optimized, but it's still too slow for my liking. Seeing as it's such a repetitive operation, I decided I should try my hand at a GPU implementation. My question is: "Is a GPU implementation in such a high-level language even worth considering?" If it isn't, I'll have to teach myself C or something, rewrite the software, and then put my video card to work, which is quite a hassle but totally worth it because prime numbers are cool.
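[For reference, here is a minimal sketch of the kind of sieve the post describes, in plain Python; it is not MaxZ's actual script, just an illustration of the technique. It marks the multiples of each prime as composite and collects whatever is left.]

def sieve_of_eratosthenes(limit):
    # One flag per number from 0..limit; 1 means "still assumed prime".
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0  # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # Multiples of i below i*i were already crossed off by smaller primes.
            is_prime[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(50))  # [2, 3, 5, 7, 11, ..., 43, 47]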
The Organization

Posts: 3,475
Joined: Jun 2013
Rep: 10

Jun 28, 2013 4:19 PM #1021899
You're better off using Cython IMO to put C code in Python and learning that, since Python is basically fancy C.
GPU implementations aren't standardized; they're specific to the chipset.
The speedup you would get from Cython is a much better deal than using your GPU, in terms of the speed gained versus the difficulty of setting it up.
There might be some Python libs that use your GPU, possibly OpenGL since it does graphics processing, but that's up to you.
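[For what it's worth, here is a rough sketch of the Cython approach the reply suggests, using Cython's "pure Python" mode; the file name and build command are just an example setup, assuming Cython is installed. The file is still ordinary Python and runs uncompiled, but building it with "cythonize -i sieve_cy.py" turns the typed loop variables into plain C ints, which is where most of the speedup comes from.]

# sieve_cy.py -- hypothetical file name; build in place with: cythonize -i sieve_cy.py
import cython

@cython.locals(limit=cython.int, i=cython.int, j=cython.int)
def primes_up_to(limit):
    is_prime = bytearray([1]) * (limit + 1)   # one flag per candidate number
    is_prime[0] = is_prime[1] = 0             # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):  # cross off multiples of i
                is_prime[j] = 0
    return [n for n in range(2, limit + 1) if is_prime[n]]

[Uncompiled it behaves like normal Python, so it can be tested before bothering with the C toolchain.]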