I wrote a short Python script that generates prime numbers using the Sieve of Eratosthenes. I won't bother explaining the concept, as it's quite simple. My dilemma is this: the script is pretty well optimized, but it's still too slow for my liking. Since the sieve is such a repetitive operation, I figured I should try my hand at a GPU implementation.

My question is: is a GPU implementation in such a high-level language even worth considering? If it isn't, I'll have to teach myself C or something, rewrite the software, and then port it to my video card, which is quite a hassle but totally worth it because prime numbers are cool.
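For reference, here's a rough sketch of the kind of sieve I'm talking about (not my actual script, just the same basic approach with a byte array and slice assignment):

```python
def sieve(limit):
    """Return all primes up to and including `limit` using the Sieve of Eratosthenes."""
    if limit < 2:
        return []
    # is_prime[i] == 1 means i has not been crossed off yet
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross off every multiple of p, starting at p*p
            count = len(range(p * p, limit + 1, p))
            is_prime[p * p :: p] = bytearray(count)
    return [i for i, flag in enumerate(is_prime) if flag]

print(sieve(50))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```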