main thread

How should I return the proportion of g's in the last part of the question: as a fraction, a percentage, or the actual count?

For example, if my g's are 100 and times=400, should it be 0.25, 25 or 100?

You should return the tuple

(p1,p2)

such that

0<=p1<=1

0<=p2<=1

p1+p2 = 1.
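If it helps, here is a minimal sketch of turning raw counts into such a tuple (the names count_g and times are my own illustration, not from the assignment):

```python
# Hypothetical helper: convert a raw count into the (p1, p2) tuple.
# 'count_g' and 'times' are illustrative names, not the assignment's.
def proportions(count_g, times):
    p1 = count_g / times   # fraction, guaranteed 0 <= p1 <= 1
    p2 = 1 - p1            # complementary fraction, so p1 + p2 = 1
    return (p1, p2)

print(proportions(100, 400))  # (0.25, 0.75)
```

So for the example above (100 g's out of times=400), the first component is 0.25, not 25 and not 100.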

How efficient should the functions in sections B and C be?

Running **order_safe_prime** takes only a few seconds, but running **stats_ord_secure** with times=10000 takes about 5 minutes on my computer. Are those timings reasonable?

I recommend commenting out the lines in order_safe_prime that check whether the number is prime, but only for the collection of this data. I found that most of the function's running time is spent in is_prime. Since we are given 3 primes here, there is no reason to check whether p and q are prime. This should make the run take less than a minute.
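For what it's worth, here is a rough sketch of the idea (my own code, not the course's): for a safe prime p = 2q + 1, the order of g must divide p - 1 = 2q, so only four candidate divisors need checking, and no primality test is involved at all:

```python
# Sketch (not the course's code): order of g modulo a safe prime p = 2q+1.
# The divisors of p - 1 = 2q are exactly 1, 2, q, 2q, so the order of g
# is one of these four values; no primality testing is needed.
def order_no_primality_check(g, p):
    q = (p - 1) // 2
    for d in (1, 2, q, 2 * q):
        if pow(g, d, p) == 1:   # built-in 3-argument modular exponentiation
            return d

print(order_no_primality_check(2, 23))  # 11, since 2**11 % 23 == 1
```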

Thanks!

I've actually done this already: my version of stats_ord_secure doesn't even call order_safe_prime. Instead, it checks the order on its own, without the primality tests in order_safe_prime.

Maybe it's just my computer's old age…

Hi,

I'm encountering the same problem described above. Should the function in 5(3) really take that long to run? It takes about 3 minutes even after removing the primality test. Or is there maybe a bug in my code? In 5(2) it runs in under one second.

The function stats_ord_secure(p) does take a few minutes to run on the safe primes you were asked to use. However, this is just 3 to 6 minutes, so it won't eat up all of your vacation. I do not see why there was an expectation that this function would terminate in seconds. Computationally heavy tasks do take time, and computers are not omnipotent!

Below I have included some relevant executions from the IDLE shell of my (4-year-old) Mac. Checking the primality of

$p=2^{1051}+2627451$ took 2.56 seconds (and a similar time for is_prime(q), $q=(p-1)/2$).

Finding the orders of 10,000 random samples inside stats_ord_secure(p) took 256 seconds on my Mac, or just over 4 minutes.

So removing the is_prime(p) and is_prime(q) tests would have a very small impact on the overall running time, and I suggest you do not remove them.

>>> p=2**1051+2627451
>>> is_prime(p)
True
>>> is_prime((p-1)//2)
True
>>> elapsed("is_prime(p)")
2.56
>>> elapsed("is_prime((p-1)//2)")
2.5400000000000205
>>> elapsed("stats_ord_secure(p)")
256.28000000000003
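For anyone who wants to reproduce such timings, something like the following should behave similarly. This is a guess at the course's elapsed() helper, built on the standard timeit module; the actual course version may differ:

```python
import timeit

# Rough stand-in for the course's elapsed() helper (assumption: it times
# one execution of a statement, given as a string, and returns seconds).
def elapsed(statement, number=1):
    return timeit.timeit(statement, number=number, globals=globals())

print(elapsed("sum(range(10**5))"))  # a small fraction of a second
```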

This is an incorrect assessment of these functions.

The call stack in this case is:

stats_ord_secure»order_safe_prime»is_prime.

Testing these functions on my computer I receive:

>>> p=2**1051+2627451
>>> elapsed("is_prime(p)")
1.7375235156237068
>>> elapsed("is_prime((p-1)//2)")
1.6881974769269164
>>> elapsed("order_safe_prime(156472373270, p)")  # chose a random number
3.454178799566762

This shows that without is_prime, order_safe_prime would run in approximately 0.0285 seconds instead of 3.45 seconds. That's approximately 121 times faster!

Assuming 3.45 seconds for each number in stats_ord_secure, since it runs 10,000 times, I would get a run-time of about 34,500 seconds, or roughly 575 minutes. Running without is_prime would take approximately 285 seconds, i.e., under 5 minutes.
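Plugging the measured timings quoted above into a quick script confirms this extrapolation:

```python
# Re-doing the arithmetic with the timings quoted above.
with_tests = 3.454178799566762                       # one order_safe_prime call
primality = 1.7375235156237068 + 1.6881974769269164  # is_prime(p) + is_prime(q)
without_tests = with_tests - primality

print(round(without_tests, 4))            # ~0.0285 seconds per call
print(round(with_tests / without_tests))  # ~121x speed-up
print(round(with_tests * 10_000 / 60))    # ~576 minutes for 10,000 calls
```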

That stats_ord_secure ran in 256 seconds for you, I can only explain as either luck with the randomly generated numbers or, with all due respect (of which there is much), some error on your part, such as running a different version of one or more of the functions than the ones we were given.

Hi,

For question 5.2, where can I find the number of operations that the function "pow" takes? Can I use this Python function in this question?

Thanks!

In addition, can I ignore the operations that "is_prime" performs, assuming I got a valid input?

No. Do not ignore the primality testing in the analysis.

You can certainly use the built in function "pow" in your code.

I have just answered a question about the computational complexity of "pow" in page 3 of the thread "**General Forum**".

Hi,

In 5a:

1. Is g strictly smaller than p, as written in the code:

if g<1 or g>p-1:

or can g be equal to p?

2. The code performs only a partial input check: we did not test whether g is prime. Is there a reason for that?

3. In the complexity analysis, can we assume that the complexity of is_prime is constant?

Thanks.

You mix Hebrew and English without following the "Writing in Hebrew" tips, causing your post to be garbled.

If I correctly interpret the question, you ask about the possibility of g=p. This was an error in the earlier version of HW4, q5. It is now fixed (see HW page). The condition on g should be $1\leq g \leq p-1$.

For p which **is** a prime, is_prime(p) makes 100 calls to pow(a,p-1,p) for random a's in the appropriate range. It is definitely **not** a task with $O(1)$ complexity.
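Concretely, a primality test of that shape looks like the following. This is my paraphrase of the idea described above; the course's version may differ in details:

```python
import random

# Fermat-style primality test as described above: 100 random-base checks,
# each a single modular exponentiation. A paraphrase, not the course's code.
def is_prime_fermat(p, rounds=100):
    if p < 4:
        return p in (2, 3)
    for _ in range(rounds):
        a = random.randint(1, p - 1)
        if pow(a, p - 1, p) != 1:   # Fermat's little theorem fails: composite
            return False
    return True                     # probably prime

print(is_prime_fermat(23))  # True
print(is_prime_fermat(25))  # False (with overwhelming probability)
```

Each round costs one modular exponentiation, so for an n-bit p the test performs on the order of n modular multiplications per round, nowhere near constant time.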

In question 5, parts 1 and 2:

Can we omit from the complexity analysis the complexity of the function that checks whether the number is prime (the "is_prime" function)?

I want to emphasize two things:

1. Earlier questions, as I understood them, asked about omitting the primality check from the code (maybe I am wrong). That is not what I am asking.

2. The function I am asking about contains, among other things, a modular multiplication operation.

In question 5.2, how do I check that my code is correct?

Could you give us some input and output examples to test against?

Try some small safe primes, for example p=23 or p=59.
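As a sanity check on such small inputs, one can brute-force the order of every g and look at the distribution (a rough sketch of my own; names are not from the assignment):

```python
from collections import Counter

# Brute-force order computation: multiply by g until reaching 1.
# Only feasible for small p; useful as a cross-check against a
# faster implementation on p = 23 or p = 59.
def naive_order(g, p):
    x, d = g % p, 1
    while x != 1:
        x = (x * g) % p
        d += 1
    return d

p = 23  # safe prime: 23 = 2*11 + 1
print(Counter(naive_order(g, p) for g in range(1, p)))
# orders 1, 2, 11, 22 appear 1, 1, 10, 10 times respectively
```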

In question 5.2:

1. Are we still operating on the same assumption (posed in question 5.1) that 1<=g<=p-1 ?

2. This question refers to number of "operations" that the function executes. Do you mean "bit operations" (as in 5.1)?

3. Should we count only modulo operations as we did in 5.1 ?

4. Should we address the complexity of the random.randint() command in is_prime()? If so, what is the complexity of this function?

Thanks

You can assume that random.randint() takes a constant number of operations.

In all the rest, we do mean bit operations.

The bounds on g are well specified in the current (updated) version of the question, I believe.

A question… how much would one such operation cost me:

(pow(g,q)%p)

We are given that multiplying two numbers and taking a modulo costs n^2 bit operations…

The question is whether pow(g,q) followed by a single modulo operation is also n^2.

The cost of (pow(g,q)%p) is not well defined as stated. As for pow, please refer to my response (mentioned earlier in this thread).
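One point worth making explicit here (my example, not from the assignment): pow(g, q) % p and the 3-argument pow(g, q, p) return the same value, but the latter reduces modulo p after every multiplication, so its intermediate values never grow beyond p, which is what makes it practical for 1000-bit operands:

```python
# Same result, very different cost: the 3-argument pow reduces mod p
# throughout, while pow(g, q) first builds a huge intermediate number.
g, q, p = 7, 11, 23
assert pow(g, q) % p == pow(g, q, p)
print(pow(g, q, p))  # 22
```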