510510 = 17# = primorial(17) = 2*3*5*7*11*13*17.
92160 = euler_phi(17#) = 1*2*4*6*10*12*16.
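These are easy to sanity-check, e.g. in Python with sympy (note sympy's primorial takes the *count* of primes, so primorial(7) is the product of the first 7 primes, i.e. 17#):

```python
# Sanity check of the values above (Python + sympy).
# sympy's primorial(n) takes the count of primes: primorial(7) is the
# product of the first 7 primes, i.e. 17# = 2*3*5*7*11*13*17.
from sympy import primorial, totient

p = primorial(7)
print(p)           # 510510
print(totient(p))  # 92160 = 1*2*4*6*10*12*16
```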
I think I'm with everyone else in trying to figure out where Ipn is derived from. The obvious guess is that it's the primorial plus the previously generated primes, with the single new prime sieved out. But that would make the primorial redundant: it reduces to "just use this sequence, which was made by sieving out p (the largest p <= sqrt(n)) from the previous sequence, which was made by sieving out prev_prime(p) from its previous sequence, and so on."
I think Bobby's response to the earlier question does get to the point. For the small 15-digit example you have to generate a 4.6-million-digit primorial, then somehow come up with the sequence. Take a prime in the range of 2^512, which would be useful for an old RSA key. APR-CL takes about 0.1 seconds to prove one on a fast computer, and BPSW about 200 µs (not a proof, but it would be big news if the number were composite, since you'd be the first to find a counterexample). For your method we would need to generate primorial(about 10^77) just to get started, since sqrt(2^512) = 2^256 ≈ 10^77. Ouch.
Taking a stab at your earlier question #2, which I'll interpret as generating primes in a range, here are some ideas. If you just want something to run, look for primesieve. If you're looking for prime counts, there are fast algorithms that don't involve generating the primes at all, so that's a different question.
1. Either a very small range, or a very large base (e.g. well over 10^20). Pick one of:
a) Primality test with a good method that quickly throws out obvious composites.
b) Sieve to some convenient limit, then primality test the remainder.
2. The usual case. Use a segmented Sieve of Eratosthenes.
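A minimal sketch of that in Python (illustrative only -- a serious sieve would bit-pack, skip evens, and size segments to the CPU cache; segmented_sieve and simple_sieve are just names I'm using here):

```python
# Minimal segmented Sieve of Eratosthenes sketch (illustrative only).
from math import isqrt

def simple_sieve(limit):
    """Plain SoE: all primes <= limit."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [p for p, b in enumerate(is_prime) if b]

def segmented_sieve(lo, hi, segment_size=32768):
    """Primes in [lo, hi], using base primes up to sqrt(hi)."""
    base_primes = simple_sieve(isqrt(hi))
    primes = []
    for start in range(max(lo, 2), hi + 1, segment_size):
        end = min(start + segment_size - 1, hi)
        seg = [True] * (end - start + 1)
        for p in base_primes:
            # First multiple of p in [start, end], but never p itself.
            first = max(p * p, ((start + p - 1) // p) * p)
            for m in range(first, end + 1, p):
                seg[m - start] = False
        primes.extend(start + i for i, b in enumerate(seg) if b)
    return primes

print(segmented_sieve(10**6, 10**6 + 100))  # [1000003, 1000033, 1000037, ...]
```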
Notes:
- You may find references to the Sieve of Atkin. If your SoA is faster than your SoE, I'll bet you coded the SoE wrong. It's not as egregious as AKS, but it's another case of something that sounds good in theory but isn't so hot in practice.
- There are a lot of ways to optimize the sieve internals. It's important to at least start with the correct 4-line SoE (conceptually, the simple_sieve in the sketch above), but you can go crazy unrolling, presieving, cache-blocking, parallelizing, etc.
- Especially for large ranges, Oliveira e Silva's work on fast segmented sieves can be useful. I don't use it, but primesieve does, and it is definitely better in some situations.
- For case 1b sieving: T. R. Nicely's prime gap verification program does this, for example. Personally I just worked on making the primality test fast at weeding out composites, partly because it kept things simpler. As the range grows, sieving becomes more and more advantageous. For cases like next/prev prime, the range is typically small and you're guessing at it, so it's more of a tossup.
- For bases over 3k digits there are some additional ideas.
This is a wheel sieve. See, for example, "Wheel factorization" on Wikipedia. There are some good papers by Pritchard, Quesada & van Pelt, and others. Crandall and Pomerance's book is available online if you don't have a copy; sections 3.1.2 to 3.1.4 discuss some of this.
A lot of sieves use a wheel of 30, since that gives 8 candidates per 30 numbers, which packs nicely into a byte, and gets you most of the gain. I believe primesieve uses a wheel of 210. It gets unwieldy before long, for very little additional gain. At some point, especially given caches, you're better off doing the extra small mod (or adding one more number to the GCD if using bigints) than walking a giant table.

Another thing a lot of sieves do is presieve the range. Take the wheel-30 as an example: make a constant memory chunk of size 7*11*13 (only 1001 bytes) and fill the sieve range with it, appropriately rotated. Presto -- clearing the memory for the sieve just filled in all multiples of 7, 11, and 13 without any computation, so you can start sieving at 17.
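To make the presieve idea concrete, here's a simplified Python illustration. It uses one flag per number instead of the bit-packed wheel-30 bytes described above, so the pattern period is 7*11*13 = 1001 *numbers* rather than 1001 bytes, but the tile-a-rotated-pattern trick is the same (presieved_segment is a name I made up for this sketch):

```python
# Simplified presieve illustration. A precomputed pattern marking
# multiples of 7, 11, 13 is tiled, rotated, across the sieve segment.
# (As with any presieve, a segment containing 7, 11, or 13 themselves
# would need special-casing; ignored here for brevity.)
PERIOD = 7 * 11 * 13  # 1001

# True = still a candidate, False = divisible by 7, 11, or 13.
PATTERN = [i % 7 != 0 and i % 11 != 0 and i % 13 != 0 for i in range(PERIOD)]

def presieved_segment(start, length):
    """Flags for [start, start+length) with multiples of 7, 11, 13
    already cleared, by tiling PATTERN at the right rotation."""
    offset = start % PERIOD
    rotated = PATTERN[offset:] + PATTERN[:offset]
    reps = length // PERIOD + 2  # enough copies, then trim
    return (rotated * reps)[:length]

seg = presieved_segment(1000, 50)
# Sieving can now start at 17 (2, 3, 5 are handled by the wheel itself).
print([1000 + i for i, ok in enumerate(seg) if not ok][:5])
```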
3. "We all know all prime numbers up to some limit -- why?"
Just because we know the 17-million-digit number 2^57885161-1 is prime doesn't mean we have enumerated all the primes up to that point. Maybe I'm not understanding the question. Given numbers of certain forms (e.g. 2^p-1), we can prove their primality much faster than arbitrary numbers of similar size. For general numbers in the 15-22k digit range, it just takes a fast ~$1000US computer, a copy of Primo, and 3-5 months of letting it crunch. We're limited in that most people don't want to wait that long for a proof of a number that a probable prime test told us, in a minute or two, was almost certainly prime.
Computers have been getting faster, of course, but there have also been gradual advances in fast multiprecision math software over the last few decades. For proofs, APR-CL and ECPP were enormous advances, though they had no effect on the records for the largest proven primes, since those were all of special form. I don't follow the Mersenne prime searches, so I can't elaborate on advances there (other than distributed searches, computer speed improvements, and faster math libraries).
Next question: "That is limited to 1000 digits -- what about 10^6 digits, and the latest record sizes? My aim is records."
For records, first look at something like the Prime Pages' Top Twenty and all their information about record primes.
Finding probable primes in a range is an interesting task to do quickly. The fastest way I know of today, for numbers in the 10^6 digit range, is presieving and then running OpenPFGW on the remaining set. This isn't going to set any records, however.
For primality proofs at 10^6 digits, I don't think general proofs like N-1, AKS, APR-CL, and ECPP will work. The only way those will work is back-construction of something using the proof (basically the same method Maurer's random provable prime technique uses), which means you've found a giant provable prime but you didn't really get to pick it. The largest proof using these methods is currently ~26k digits, done with ECPP, and it took many months using custom software. The time taken grows approximately as the 4th power of the number of digits (and in practice it slows down further at the extreme end).
This is rather complicated because some people are looking for answers for tiny numbers, e.g. under 10 thousand; some for small integers like 30608076202112141 (e.g. under 2^64); others for 100-, 1000-, 10000-, or 100000-digit numbers. Each of these has a different best strategy, not to mention numbers of a special form (e.g. Mersenne numbers). Apologies if this reply goes down too many ratholes.
Quick solution for most people, with numbers under 5000 or so digits:
- fire up Pari/GP. Type ispseudoprime(####). (#### is your number).
- for a proof, use isprime(###) instead. Until the number is hundreds of digits long you may as well do this.
For primality of general numbers, some options:
- Trial division. Works great for tiny numbers, like under 100k. Slows down very quickly. See Crandall and Pomerance sections 3.1.2-3.1.4 for discussion of testing odds vs. wheels vs. primes.
- Miller-Rabin with N random bases. Probable prime test, quite fast (see the sketch after this list). For numbers under 2^64 there are deterministic sets of bases that give correct results for all inputs. Simple to implement and works well for most people, though the more you care about correctness, the larger you find N needs to be.
- BPSW test, about the same speed as M-R with 3 bases. Probable prime test with no known counterexamples; it's the test used by almost all math packages (sometimes they add an extra M-R test). It has been verified to give *no* false results below 2^64.
- Proof: N-1/N+1. Generally uses one of the theorems from the Brillhart-Lehmer-Selfridge 1975 paper. Requires factoring n-1 or n+1, so it doesn't scale well, but it's quite fast for "small" numbers (e.g. under 40 digits, depending on your factoring code, of course). Theorem 5 is pretty simple to understand and implement. Can produce a very simple, fast-to-verify certificate. Not really useful for numbers of 100+ digits. This was Pari's method 10 or so years ago.
- Proof: APR-CL. The name comes from the authors' initials. Complicated to understand and implement. At least two open-source versions are available (Pari and mpz_aprcl). No certificate. Quite fast -- a 100-digit proof in ~0.1 seconds, 1000 digits in under 10 minutes. Implementations are typically limited to ~7000 digits (there are a lot of precalculated constants). This is Pari's current method.
- Proof: ECPP (Elliptic Curve Primality Proof). Complicated to understand and implement. Primo is closed source but free to use, and there are at least two open-source implementations. ECPP has been used to prove many numbers over 10,000 digits. Quite fast -- a 100-digit proof in under 0.1 seconds, 1000 digits in under 10 minutes (Primo with multiple cores will go even faster). The big advantage is that you get a primality certificate that can be verified in a fraction of the time, using different code on a different computer if one wants. Hence, unlike APR-CL and AKS, you're not just being told: "it's prime. trust me. what could go wrong?"
- Proof: AKS. A theoretical breakthrough. Sort of easy to implement (the algorithm from the v6 paper is surprisingly simple, but doing modular polynomial multiplication correctly and quickly takes effort). Of no practical use because it's insanely slow -- numbers that APR-CL or ECPP prove in single seconds will take years with AKS.
- There are many other probable prime tests (e.g. various Lucas tests by themselves, PFGW's tests, Frobenius) and proofs (Cyclotomy).
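As promised above, a minimal Miller-Rabin sketch in Python. A BPSW implementation is essentially one base-2 round of this plus a strong (or extra strong) Lucas test, which is omitted here:

```python
# Minimal Miller-Rabin strong probable prime test (sketch).
import random

def is_strong_probable_prime(n, a):
    """Strong pseudoprime test of odd n > 3 to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:        # write n-1 = d * 2^s with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def miller_rabin(n, rounds=20):
    """Probable prime test using `rounds` random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    return all(is_strong_probable_prime(n, random.randrange(2, n - 1))
               for _ in range(rounds))

print(miller_rabin(2**61 - 1))  # True: a Mersenne prime
# Composite, yet a strong pseudoprime to all bases up to 23; random
# bases catch it with overwhelming probability.
print(miller_rabin(3825123056546413051))  # False
```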
My preferred method (a code sketch follows at the end of this post):
- If the number is less than some threshold (e.g. 10k), do trial division by primes up to sqrt(n). The result is correct.
- Test divisibility by some small primes to quickly weed out composites. I test up to 53; most people test less. With GMP and large numbers, I use a big GCD.
- If the number is less than 2^64, do deterministic M-R or BPSW. The result is correct.
- Do a BPSW test. Pretty much every composite is now quickly weeded out. There are no known counterexamples, though heuristics suggest some exist. If you don't absolutely need a proof, you're done. If you're doing crypto and don't want a proof, add a Frobenius test and/or some M-R tests using random bases. If the number passed BPSW and you want a proof, continue.
- If you have a good factoring library, make a quick attempt at BLS75 theorem 5. It's unlikely to work with very large numbers, but with just a little effort at Pollard rho and P-1 it can prove a surprisingly large percentage of 128-bit numbers in a few milliseconds. Worst case, you move on after spending a few milliseconds.
- ECPP. Send the resulting certificate through verification if you're paranoid. You could use APR-CL instead.
There are some additional complications if one wants the fastest results at various sizes (e.g. with GMP and large numbers a big GCD is fast, so push the small-prime testing further; with multi-thousand-digit numbers a tree sieve is nice). I find the extra strong Lucas test to be the best choice for BPSW's Lucas test; other software makes different choices. Speeding things up for native integers is fun (asm mulmods, Montgomery math, etc.).
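To tie the steps together, here's a rough Python sketch of that pipeline. sympy's isprime (itself a BPSW-style test: strong base-2 plus a strong Lucas test) stands in for my BPSW step, and the proof stages (BLS75 theorem 5, then ECPP or APR-CL) are only noted in a comment, since they need real factoring and ECPP code; my_is_prime and the 10k threshold are just illustrative choices:

```python
# Rough sketch of the preferred-method pipeline above.
from math import isqrt, gcd
from sympy import isprime as bpsw_stand_in  # BPSW-style probable prime test

SMALL_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53)
SMALL_PRIME_PRODUCT = 1
for _p in SMALL_PRIMES:
    SMALL_PRIME_PRODUCT *= _p

def my_is_prime(n):
    # 1. Tiny inputs: trial division to sqrt(n). Result is proven.
    if n < 10_000:
        return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))
    # 2. Weed out most composites cheaply: one gcd against the product
    #    of the primes up to 53 replaces 16 trial divisions.
    if gcd(n, SMALL_PRIME_PRODUCT) != 1:
        return False
    # 3. Below 2^64 this is deterministic and proven correct; above it,
    #    BPSW has no known counterexamples. A proof (BLS75 theorem 5 if
    #    n-1/n+1 factor easily, else ECPP or APR-CL) would follow here
    #    if one were required.
    return bpsw_stand_in(n)

print(my_is_prime(2**89 - 1))                 # True: a Mersenne prime
print(my_is_prime((2**61 - 1) * (2**31 - 1))) # False: composite
```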