Many more, alternate methods, and examples: Wikipedia: Divisibility rule

(p.s. the talk at YAPC was by me, so we have a little Perl representation)

I didn't know I was supposed to ask a question... Will we have a proof of the Riemann Hypothesis by 2059?

Indeed there are.

There was even a "Perl and Scientific Programming" panel at YAPC this year, and someone gave a lightning talk about a Perl number theory package.

Sorry for not including the link. It was here as well: http://www.mathisfunforum.com/viewtopic … 75#p273775.

Re implementation, Darn -- I was hoping to find something new! Thanks for looking.

**danaj**- Replies: 3

A year ago, bobbym wrote:

They are right about AKS, it takes about an hour to prove the primality of a 1000 digit number on an old desktop.

Can you point me to the software you're using? This is *much* faster than any AKS implementation I am aware of. It's millions of times faster than stated performance numbers in papers about AKS implementations, e.g.

- Crandall and Papadopoulos (2003) "A 30-decimal-digit prime this took, [...], about a day to be resolved."

- Rotella (2005), "An Efficient Implementation of [AKS]" takes a minute for 4 digit numbers, and ends with "[...]the AKS algorithm, while it is elegant and relatively simple, should be viewed as an algorithmic curiosity rather than an algorithm to be used for actual primality proving."

- Salembier and Southerington (2006) show a time of 30 minutes for a 15 digit number.

- Brent (2010) estimates 840 *years* for a 308 digit prime using a version with some improvements.

My code/machine is running about 200 times faster than Brent's numbers, and I come up with an estimate of about 4200 years for a 1000-digit number. APR-CL and ECPP take about 2-5 minutes (single threaded) for 1000 digits on the same computer.

There are plenty of other commonly discussed sets as well, e.g. Safe, Sophie Germain, Lucas, Fibonacci, Lucky, Palindromic, Pillai, Good, Cuban (y+1, y+2), Primorial (+1, -1), Euclid, Circular, Panaitopol. (Digressing about Euclid and Primorial primes: one wants to distinguish between P#+/-1 and Pn#+/-1 -- that is, the product of the primes up to P versus the product of the first n primes.) I'm sure one could come up with another hundred sets; surveying the OEIS would turn up lots of them, for example.

For intersections, I found a useful practical method to be listing the allowable subsets of n mod 210 for each type. Some don't have a pattern, but many do (e.g. cuban1, cuban2, twin, triplet, quadruplet, cousin, sexy, safe, sophie, panaitopol to name some). Not only does this help for single cases, but one can then do the intersection of the two (or 3 or more) types. Sometimes this yields a null set once past tiny numbers (e.g. quadruplet cuban primes, or safe quadruplets other than 5 and 11).
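The residue-class idea can be sketched in Python (my code is Perl, but the logic is the same; the function name and offset patterns here are just illustrative):

```python
from math import gcd

def allowed_residues(offsets, modulus=210):
    """Residues r mod `modulus` such that r+k is coprime to the modulus
    for every offset k in the constellation -- a necessary condition for
    all members to be prime, once past the tiny cases like 5 and 7."""
    return {r for r in range(modulus)
            if all(gcd(r + k, modulus) == 1 for k in offsets)}

twin = allowed_residues([0, 2])   # p and p+2 both prime
sexy = allowed_residues([0, 6])   # p and p+6 both prime
# Intersecting the two sets gives the residues a "twin sexy" prime may occupy:
both = twin & sexy
print(sorted(both))
```

When the intersection comes out empty past the small cases (as with safe quadruplets other than 5 and 11), that settles the question for that pair with no search at all.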

This doesn't really help answer the original questions however. For my reading of the first question, certainly we can find examples where sets intersect, e.g. twin sexy primes, or titanic cousin primes (e.g. 10^1000 + 744843).

I believe the answer to your second question is no in general. In some cases, the number of intersections between two sets up to a bound B will not necessarily follow the same trend up to B+delta. In other cases (what I believe most people are discussing) both sets will be infinite, though maybe it would be possible to prove that the number of elements of set P in an interval is always greater than the number of elements of set Q in the same interval, or to compare asymptotic densities, or something along those lines (e.g. the titanic primes in an interval starting after 10^1000 are going to be at least as dense as any of the restricted sets in the same interval).

I'm not sure if I should post, (1) it is somewhat necro-posting, (2) there seems to be another conversation going on about DH, and (3) I type way too much. But regarding generating large primes, this is a good paper about generating n-bit random primes: "Close to Uniform Prime Number Generation With Fewer Random Bits" by Fouque and Tibouchi available at eprint.iacr.org/2011/481. For crypto, you may want to also look at FIPS 186-4.

PRIMEINC: generate a random number in the range, run next_prime on it. Simple and fast, but bad distribution.
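A Python sketch of PRIMEINC (my code is Perl; the `is_prime` helper here is a standard deterministic Miller-Rabin, included so the example is self-contained):

```python
import random

def is_prime(n):
    """Miller-Rabin with the first 12 primes as bases; this set is known
    to be deterministic for all n below roughly 3.3 * 10^24."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def primeinc(bits, rng=random):
    """PRIMEINC: uniform random odd n-bit start, then scan upward to the
    next prime.  Simple and fast, but primes sitting after large gaps
    are chosen far more often than average -- the bad distribution."""
    lo = 1 << (bits - 1)
    n = rng.randrange(lo, lo << 1) | 1
    while not is_prime(n):
        n += 2          # a sketch: ignores the rare run past 2^bits
    return n
```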

Trivial: aka the Monte Carlo method, as in post 3. Perfectly uniform, but uses excessive randomness and is very slow for large sizes. A comment about the post 3 code: for sizes over 2 bits, you'd want to take n-2 random bits and then set bits n-1 and 0 -- there is no need to waste time and entropy testing even numbers. A well written test will return almost instantly on an even input, but you still wasted time getting all those bits -- on an isolated headless server using /dev/random you may wait hours for the next set.
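The bit-fixing suggestion looks like this in Python (function names are mine; pass in whatever primality test you trust):

```python
import random

def random_nbit_candidate(bits, rng=random):
    """Draw bits-2 random bits, then force the top bit (so the number
    really has `bits` bits) and the bottom bit (so it is odd) -- no
    entropy spent generating even candidates."""
    assert bits > 2
    middle = rng.getrandbits(bits - 2)
    return (1 << (bits - 1)) | (middle << 1) | 1

def random_nbit_prime(bits, is_prime, rng=random):
    """Monte Carlo: draw uniform odd n-bit candidates until one passes
    the supplied primality test.  Uniform over the odd n-bit primes,
    but consumes many random draws."""
    while True:
        n = random_nbit_candidate(bits, rng)
        if is_prime(n):
            return n
```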

Modified: methods like Fouque and Tibouchi A1 or A2, Joye-Paillier, etc. I use a somewhat similar method for odd ranges (e.g. for ranges not a power of 2). These sacrifice some uniformity for big increases in speed and less entropy consumption. (The entropy consumption may or may not matter to you, but sometimes it is important.)

Provable primes: typically Shawe-Taylor or Maurer's FastPrime. Common for crypto use. You can find Shawe-Taylor in FIPS 186-4, and Maurer's algorithm in his publicly available paper, in Menezes's book, or in various open source implementations. These work by generating a small random prime using the trivial method, then generating a larger one by iterating with additional random input until it is proven prime with either Pocklington or the improved version from BLS75 theorem 3, then recursing to make successively larger primes. They only generate a subset of the primes in the range (10% for Maurer's FastPrime, substantially less for Shawe-Taylor), but that typically doesn't matter.
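The Pocklington step those constructions iterate on can be sketched as follows (a simplified single-factor version in Python, not the FIPS algorithm itself; the helper name is mine):

```python
from math import gcd

def pocklington_step(q, r):
    """One Maurer-style construction step (simplified sketch): candidate
    n = 2*r*q + 1 where q is an already-proven prime.  Requires the
    factored part q to exceed sqrt(n).  Returns n if proven prime by
    the Pocklington criterion, else None."""
    n = 2 * r * q + 1
    if q * q <= n:                         # need q > sqrt(n)
        return None
    for a in range(2, 100):                # search a few small witnesses
        if pow(a, n - 1, n) != 1:
            return None                    # Fermat test failed: composite
        if gcd(pow(a, (n - 1) // q, n) - 1, n) == 1:
            return n                       # Pocklington conditions met: prime
    return None                            # no witness found (rare); give up
```

Starting from a trivially generated small prime q, one repeatedly picks random r and keeps the n that pass, roughly doubling the size each round.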

OpenSSL or other software: good and bad. OpenSSL likely doesn't make some mistakes you might make. But you have to make sure your platform has a recent version, that your program actually calls the right executable, that you send it the right arguments and interpret the output and exceptions properly, that you're OK with its decisions on randomness sources, that you're OK with it not meeting the FIPS 186-4 standards for primality testing, and know that for sizes > 1024 bits it is slower than what can be done with GMP while meeting the standards.

There are lots of other considerations. E.g. where are you getting your randomness? Are you using a good enough primality test for your purpose? Do you need provable primes? Strong primes? Writing your own code is fun, but may have bugs -- do lots of testing.

Whether these are "quick" or not depends on the size, which method you choose, your implementations, and what you think quick means. For 1024-bit primes, my older machine using Perl code generates them in 0.06s each (F&T algorithm 1, ISAAC seeded from /dev/random, BPSW + 1 M-R probable primes). Add 0.006s for 3 additional random M-R plus a Frobenius test. Add not much more for enough extra M-R tests to make FIPS 186-4 happy. For smaller sizes sometimes running a primality proof on the n-bit prime is faster than a constructive method, but your mileage may vary. At 1024 bits, my Maurer routine takes 0.65s, while Shawe-Taylor using FIPS 186-4 + SHA256 takes 0.24s. The generation code is in Perl so it could be faster, but the heavy work ends up being in C+GMP.

In the example for "is 281 prime" you did:

1. find the square root of the number; it is about 17

2. Pn* = 17# = 510510

3. Ip17 = numbers not divisible by 2, 3, 5, 7, ..., 17

then: prime = Ip17 - Pn*, where Ip17 started at 510513 and Pn* was 510510.

For the example "111,111,111,111,111" (decimal), I get a square root of 10540925. Step 2, as well as the prime generation, says to use Pn* = 10540925#. That came out to 4,576,061 digits for me. The number in "No of Ipn sub series" is going to be similarly titanic. What is different about this example from the 281 example?

You have to either store these or calculate them, and either way I don't see how you can say "don't worry about time [...] to find them." Why not just precalculate the primes directly if we get to discount the time and/or space?

510510 = 17# = primorial(17) = 2*3*5*7*11*13*17.

92160 = euler_phi(17#)
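In Python, for anyone wanting to check those two numbers (the sieve helper is mine, and euler_phi of a squarefree product is just the product of p-1):

```python
from math import prod

def primes_upto(n):
    """Simple Sieve of Eratosthenes; plenty for small primorials."""
    s = [True] * (n + 1)
    s[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = [False] * len(s[p * p :: p])
    return [p for p, b in enumerate(s) if b]

ps = primes_upto(17)
primorial = prod(ps)              # 17# = 2*3*5*7*11*13*17
phi = prod(p - 1 for p in ps)     # euler_phi of a squarefree number
print(primorial, phi)             # 510510 92160
```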

I think I'm with everyone else in trying to figure out how Ipn is derived. The obvious guess is that it is the primorial plus the previously generated primes, with the single new prime sieved out. But that would make the primorial redundant. Then it reduces to "just use this sequence, which was made by sieving p (max p <= sqrt(n)) from the previous sequence, which was made by sieving prev_prime(p) from its previous sequence, etc."

I think Bobby's response to the earlier question does get to the point. For the small 15-digit example you have to generate a 4.6-million-digit primorial, then somehow come up with the sequence. Take a prime in the range of 2^512, which would be useful for an old RSA key. APR-CL takes 0.1 seconds to prove one on a fast computer, or 200 µs for BPSW (not proven, but it would be big news if it were composite, since you'd be the first to find a counterexample). For your method we would need to generate primorial(about 10^77) just to get started. Ouch.

Taking a stab at your earlier question #2, which I will take as generating primes in a range, here are some ideas. If you just want something to run, look for primesieve. If you're looking for prime counts, there are fast algorithms that don't involve generating primes, so that's a different question.

1. Either a very small range, or a very large base (e.g. well over 10^20). Pick one of:

a) Primality test with a good method that quickly throws out obvious composites.

b) Sieve to some convenient limit, then primality test the remainder.

2. The usual case: use a segmented Sieve of Eratosthenes.

Notes:

- You may find reference to the Sieve of Atkin. If your SoA is faster than your SoE, I will bet you coded the SoE wrong. It's not as egregious as AKS, but it's another case where it sounds good in theory but it's not so hot in practice.

- There are a lot of ways to optimize the sieve internals. It's important to at least start with the correct 4-line SoE, but then you can go crazy unrolling, presieving, cache-blocking, parallelizing, etc.

- Especially in the large range, Oliveira e Silva's work on fast segmented sieves can be useful. I don't use it, but primesieve does, and it is definitely better in some situations.

- For case 1b sieving: T.R. Nicely's prime gap verification program does this, for example. Personally I just worked on making the primality test fast for weeding out composites, partly because it made things simpler. As the range grows, sieving becomes more and more advantageous. For cases like next/prev prime, the range is typically small and you're guessing at it, so it's more of a tossup.

- For bases over 3k digits there are some additional ideas.
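A minimal Python sketch of case 2 (names are mine; a real implementation adds the wheel, presieve, and cache-blocking tricks discussed in this thread):

```python
from math import isqrt

def small_primes(n):
    """The basic 'correct 4-line' Sieve of Eratosthenes, up to n inclusive."""
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for p in range(2, isqrt(n) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(s[p * p :: p]))
    return [i for i in range(n + 1) if s[i]]

def segmented_primes(lo, hi):
    """Primes in [lo, hi) via a segmented sieve: only the base primes up
    to sqrt(hi) plus the current window live in memory, so huge bases
    cost O(sqrt(hi) + window) space instead of O(hi)."""
    base = small_primes(isqrt(hi - 1))
    seg = bytearray([1]) * (hi - lo)
    for p in base:
        start = max(p * p, (lo + p - 1) // p * p)   # first multiple in window
        seg[start - lo :: p] = bytearray(len(seg[start - lo :: p]))
    return [lo + i for i, b in enumerate(seg) if b and lo + i > 1]
```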

This is a wheel sieve. See, for example, "Wheel factorization" on Wikipedia. There are some good papers by Pritchard, Quesada & van Pelt, and others. Crandall and Pomerance's book is available online if you don't have a copy, and Section 3.1.2 to 3.1.4 discusses some of this.

A lot of sieves use it to 30, since then one has 8 candidates per 30 numbers, which packs nicely into a byte, and you've got most of the gain. I believe primesieve uses it to 210. It starts getting unwieldy before long, for very little additional gain. At some point, especially given caches, you're better off doing the extra small mod (or adding one more number to the GCD if using bigints) than walking a giant table.

Another thing a lot of sieves do is presieve the range: take the wheel-30 case as an example, and make a constant memory chunk of size 7*11*13 (only 1001 bytes), then fill the sieve range with it appropriately rotated. Presto -- clearing the memory for the sieve just filled in all multiples of 7, 11, and 13 without any computation, so you can start sieving at 17.
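The presieve trick in Python, one flag per integer for clarity (a packed mod-30 sieve stores 8 candidates per byte; the function names are mine):

```python
WHEEL = 7 * 11 * 13   # 1001 entries

def make_presieve():
    """Precompute the pattern once: pat[i] == 0 iff i is divisible by
    7, 11, or 13."""
    pat = bytearray([1]) * WHEEL
    for p in (7, 11, 13):
        pat[0::p] = bytearray(len(pat[0::p]))
    return pat

def presieved_segment(lo, size, pat=make_presieve()):
    """Fill [lo, lo+size) by rotating the 1001-entry pattern: multiples
    of 7, 11, and 13 arrive already cleared, so sieving can start at 17
    (2, 3, and 5 being handled by the mod-30 wheel itself)."""
    off = lo % WHEEL
    reps = size // WHEEL + 2             # enough copies to cover the window
    return (pat[off:] + pat * reps)[:size]
```

This works because 7, 11, and 13 all divide 1001, so the pattern's residues line up no matter where the segment starts.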

satish kumar singh wrote:

3. we all know all prime number up to some limit why?

Just because we know the 17-million-digit number 2^57885161-1 is prime doesn't mean we have enumerated all the primes up to that point; maybe I'm not understanding the question. Given numbers of certain forms (e.g. 2^p-1), we can prove their primality much faster than arbitrary numbers of similar size. For general numbers in the 15-22k digit range, it just takes a fast ~$1000US computer, a copy of Primo, and 3-5 months of letting it crunch. We're limited in that most people wouldn't want to wait that long just to get a proof for a number that we knew in a minute or two was almost certainly prime via probable prime tests.

Computers have been getting faster of course, but there have also been gradual advances in fast multiprecision math software over the last few decades. For proofs, APR-CL and ECPP were *enormous* advances, though they had no effect on the records for the largest proven primes, since those were all of special form. I don't follow the Mersenne prime searches, so I can't elaborate on advances there (other than distributed searches, computer speed improvements, and faster math libraries).

satish kumar singh wrote:

next ASK is limited to 1000 digits what about in 10^6 digits and latest records digits.

My Aim on records.

For records, first look at something like the Prime Pages' Top Twenty and all their information about record primes.

Finding probable primes in a range is an interesting task to do quickly. The fastest way I know today for numbers in the 10^6 digit range is presieving then using OpenPFGW on the remaining set. This isn't going to set any records however.

For primality proofs, at 10^6 digits, I don't think general proofs like N-1, AKS, APR-CL, and ECPP will work. The only way those will work is back-construction of something using the proof (basically the same method Maurer's random provable prime technique uses), which means you've found a giant provable prime but you didn't really get to pick it. The largest proof using these methods is currently 26k digits using ECPP and took many months using custom software. The time taken grows approximately as the 4th power of the number of digits (but in practice it slows down at the extreme end).

This is rather complicated because some people are looking for answers of tiny numbers, e.g. under 10 thousand. Some for just small integers like 30608076202112141 (e.g. under 2^64). Others for 100-, 1000-, 10000-, 100000-digit numbers. Each of these has a different best strategy, not to mention numbers of a special form (e.g. Mersenne numbers). Apologies if this reply goes down too many ratholes.

Quick solution for most people, with numbers under 5000 or so digits:

- fire up Pari/GP. Type ispseudoprime(####). (#### is your number).

- for a proof, use isprime(###) instead. Until the number is hundreds of digits long you may as well do this.

For primality of general numbers, some options:

- Trial division. Works great for tiny numbers, like under 100k, but slows down very quickly. See Crandall and Pomerance sections 3.1.2 - 3.1.4 for discussion of testing odds vs. wheels vs. primes.

- Miller-Rabin with N random bases. Probable prime test, quite fast. For numbers under 2^64 there are deterministic sets that give correct results for all inputs. Simple to implement, works pretty well for most people, though as you care more about the correctness you find N gets quite large.

- BPSW test, about the same speed as M-R with 3 bases. Probable prime test, no known counterexamples; the test used by almost all math packages (sometimes they add an extra M-R test). It is verified to give *no* false results below 2^64.

- Proof: N-1/N+1. Generally using one of the theorems from the Brillhart-Lehmer-Selfridge 1975 paper. Requires factoring n-1 or n+1 so doesn't scale well, but it's quite fast for "small" numbers (e.g. under 40 digits, depending on your factoring code of course). Pretty simple to understand and implement theorem 5. Can produce a very simple and fast to verify certificate. Not really useful for numbers of 100+ digits. This was Pari's method 10 or so years ago.

- Proof: APR-CL. Named from the authors' initials. Complicated to understand and implement. At least two open source versions available (Pari and mpz_aprcl). No certificate. Quite fast -- a 100-digit proof in ~0.1 seconds, 1000 digits in under 10 minutes. Implementations are typically limited to ~7000 digits (there are a lot of precalculated constants). Pari's current method.

- Proof: ECPP. Elliptic Curve Primality Proof. Complicated to understand and implement. Primo is closed source but free to use; there are at least two open source implementations. ECPP has been used to prove many numbers over 10,000 digits. Quite fast -- a 100-digit proof in under 0.1 seconds, 1000 digits in under 10 minutes (Primo with multiple cores will go even faster). The big advantage is that you get a primality certificate that can be verified in a fraction of the time, using different code on a different computer if one wants. Hence, unlike APR-CL and AKS, you're not just being told: "it's prime. trust me. what could go wrong?"

- Proof: AKS. A theoretical breakthrough. Sort of easy to implement (the algorithm from the v6 paper is surprisingly simple, but doing modular polynomial multiplication correctly and quickly takes effort). Of no practical use because it's insanely slow -- numbers that APR-CL or ECPP prove in seconds would take years with AKS.

- There are many other probable prime tests (e.g. various Lucas tests by themselves, PFGW's tests, Frobenius) and proofs (Cyclotomy).
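As an illustration of the deterministic-base remark for M-R, a Python sketch with a fixed base set (this particular 7-base set is widely reported as deterministic for all n < 2^64, credited to J. Sinclair; verify independently before relying on it):

```python
def is_prime_u64(n):
    """Miller-Rabin with a fixed 7-base set reported to be deterministic
    for all n < 2^64.  Small primes are handled by trial division first."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 325, 9375, 28178, 450775, 9780504, 1795265022):
        x = pow(a % n, d, n)
        if x in (0, 1, n - 1):   # x == 0 means n divides the base: skip it
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False         # composite witness found
    return True
```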

My preferred method:

- If number less than threshold (e.g. 10k) do trial division by primes to sqrt(n). Result is correct.

- Test with some small primes to quickly weed out composites. I test to 53, most people test to less. With GMP and large numbers, I use a big GCD.

- If number less than 2^64, do deterministic M-R or BPSW. Result is correct.

- Do a BPSW test. Pretty much every composite is now quickly weeded out. There are no known counterexamples, though it is believed some exist. If you don't absolutely need a proof, you're done. If you're doing crypto and don't want a proof, add a Frobenius test and/or some M-R tests with random bases. If the number passed BPSW and you want a proof, continue.

- If you have a good factoring library, make a quick attempt at BLS75 theorem 5. Unlikely to work with very large numbers, but with just a little effort at Pollard rho and P-1 it can prove a surprisingly large percentage of 128-bit numbers in a few milliseconds. Worst case, you move on after spending a few milliseconds.

- ECPP. Send resulting certificate through verification if you're paranoid. You could use APR-CL instead.

There are some additional complications if one wants the fastest results at various sizes (e.g. with GMP and large numbers, a big GCD is fast so do small prime testing farther, and with multi-thousand digit numbers a tree sieve is nice). I find the extra strong Lucas test to be the best choice for BPSW's Lucas test, other software makes different choices. Speeding things up for native integers is fun (asm mulmods, Montgomery math, etc.).
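The big-GCD screen against small primes looks something like this in Python (helper names are mine; with GMP bigints one gcd beats sixteen separate trial divisions):

```python
from math import gcd, prod

# The primes up to 53, as mentioned; their product is a single small bigint.
SMALL_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53)
SMALL_PRIMORIAL = prod(SMALL_PRIMES)

def has_small_factor(n):
    """One gcd against the product of all primes <= 53 replaces a run of
    trial divisions.  g == n would mean n itself is one of the small
    primes, which is not a reason to reject it."""
    g = gcd(n, SMALL_PRIMORIAL)
    return g != 1 and g != n
```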
