
A STUDY ON

“MODULAR ARITHMETIC”
Contents
Introduction
Chapter I: Divisibility
1. Foundations
2. Division Algorithm
3. Divisibility Rules
4. Greatest common Divisor
5. Euclid’s Algorithm
6. Fundamental Theorem
7. Properties of Primes
Chapter II: Introduction
1. Montgomery Reduction
2. Modular Exponentiation
3. Linear Congruence Theorem
4. Method of Successive Substitution
5. Chinese Remainder Theorem
6. Fermat’s little Theorem
7. Fermat Quotient
8. Euler Quotient
9. Euler Totient Function
a) Noncototient
b) Nontotient
10. Euler and Wilson’s Theorem
11. Primitive Roots Modulo n
a) Multiplicative Order
b) Discrete Logarithm
12. Quadratic Residue
13. Euler’s Criterion
14. Legendre Symbol
15. Gauss’s Lemma (Number Theory)

Chapter III: Modular Arithmetic


1. Introduction
2. Definition (with examples)
3. Rules of Modular Arithmetic
a) Addition
b) Multiplication
4. Remainders
5. Standard Representation
6. Modular Operations

Chapter IV: Applications of Modular Arithmetic


1. Applications of Modular Arithmetic
2. Solving Linear Congruence
3. Arithmetic with Large Integers
4. Congruence of Squares
5. Luhn Formula
6. Mod n Cryptanalysis
CHAPTER-1
INTRODUCTION
DIVISIBILITY RULES
Problem: Is the number 621 prime or composite?

Method: In the last lesson, we learned to find all factors of a whole number to determine if it is
prime or composite. We used the procedure listed below.

To determine if a number is prime or composite, follow these steps:

1. Find all factors of the number.


2. If the number has only two factors, 1 and itself, then it is prime.
3. If the number has more than two factors, then it is composite.

The above procedure works very well for small numbers. However, it would be time-consuming
to find all factors of 621. Thus we need a better method for determining if a large number is
prime or composite. Every number has 1 and itself as factors. Thus, if we could find one
factor of 621, other than 1 and itself, we could prove that 621 is composite. One way to find
factors of large numbers quickly is to use tests for divisibility.

Definition: One whole number is divisible by another if, after dividing, the remainder is zero.
Example: 18 is divisible by 9 since 18 ÷ 9 = 2 with a remainder of 0.

Definition: If one whole number is divisible by another number, then the second number is a factor of the first number.
Example: Since 18 is divisible by 9, 9 is a factor of 18.

Definition: A divisibility test is a rule for determining whether one whole number is divisible by another. It is a quick way to find factors of large numbers.
Example: Divisibility test for 3: if the sum of the digits of a number is divisible by 3, then the number is divisible by 3.

We can test for divisibility by 3 (see table above) to quickly find a factor of 621 other than 1 and
itself. The sum of the digits of 621 is 6+2+1 = 9. This divisibility test and the definitions above
tell us that...

 621 is divisible by 3 since the sum of its digits (9) is divisible by 3.
 Since 621 is divisible by 3, 3 is a factor of 621.
 Since the factors of 621 include 1, 3 and 621, we have proven that 621 has more than two factors.
 Since 621 has more than two factors, we have proven that it is composite.

Let's look at some other tests for divisibility and examples of each.

Test for 2: A number is divisible by 2 if the last digit is 0, 2, 4, 6 or 8.
Example: 168 is divisible by 2 since the last digit is 8.

Test for 3: A number is divisible by 3 if the sum of the digits is divisible by 3.
Example: 168 is divisible by 3 since the sum of the digits is 15 (1+6+8=15), and 15 is divisible by 3.

Test for 4: A number is divisible by 4 if the number formed by the last two digits is divisible by 4.
Example: 316 is divisible by 4 since 16 is divisible by 4.

Test for 5: A number is divisible by 5 if the last digit is either 0 or 5.
Example: 195 is divisible by 5 since the last digit is 5.

Test for 6: A number is divisible by 6 if it is divisible by 2 AND it is divisible by 3.
Example: 168 is divisible by 6 since it is divisible by 2 AND it is divisible by 3.

Test for 8: A number is divisible by 8 if the number formed by the last three digits is divisible by 8.
Example: 7,120 is divisible by 8 since 120 is divisible by 8.

Test for 9: A number is divisible by 9 if the sum of the digits is divisible by 9.
Example: 549 is divisible by 9 since the sum of the digits is 18 (5+4+9=18), and 18 is divisible by 9.

Test for 10: A number is divisible by 10 if the last digit is 0.
Example: 1,470 is divisible by 10 since the last digit is 0.

Let's look at some examples in which we test the divisibility of a single whole number.

Example 1: Determine whether 150 is divisible by 2, 3, 4, 5, 6, 9 and 10.

150 is divisible by 2 since the last digit is 0.


150 is divisible by 3 since the sum of the digits is 6 (1+5+0 = 6), and 6 is divisible by 3.

150 is not divisible by 4 since 50 is not divisible by 4.

150 is divisible by 5 since the last digit is 0.

150 is divisible by 6 since it is divisible by 2 AND by 3.

150 is not divisible by 9 since the sum of the digits is 6, and 6 is not divisible by 9.

150 is divisible by 10 since the last digit is 0.

Solution: 150 is divisible by 2, 3, 5, 6, and 10.

Example 2: Determine whether 225 is divisible by 2, 3, 4, 5, 6, 9 and 10.

225 is not divisible by 2 since the last digit is not 0, 2, 4, 6 or 8.

225 is divisible by 3 since the sum of the digits is 9, and 9 is divisible by 3.

225 is not divisible by 4 since 25 is not divisible by 4.

225 is divisible by 5 since the last digit is 5.

225 is not divisible by 6 since it is not divisible by both 2 and 3.

225 is divisible by 9 since the sum of the digits is 9, and 9 is divisible by 9.

225 is not divisible by 10 since the last digit is not 0.

Solution: 225 is divisible by 3, 5 and 9.

Example 3: Determine whether 7,168 is divisible by 2, 3, 4, 5, 6, 8, 9 and 10.

7,168 is divisible by 2 since the last digit is 8.

7,168 is not divisible by 3 since the sum of the digits is 22, and 22 is not divisible by 3.

7,168 is divisible by 4 since 168 is divisible by 4.


7,168 is not divisible by 5 since the last digit is not 0 or 5.

7,168 is not divisible by 6 since it is not divisible by both 2 and 3.

7,168 is divisible by 8 since the last 3 digits are 168, and 168 is divisible by 8.

7,168 is not divisible by 9 since the sum of the digits is 22, and 22 is not divisible by 9.

7,168 is not divisible by 10 since the last digit is not 0.

Solution: 7,168 is divisible by 2, 4 and 8.

Example 4: Determine whether 9,042 is divisible by 2, 3, 4, 5, 6, 8, 9 and 10.

9,042 is divisible by 2 since the last digit is 2.

9,042 is divisible by 3 since the sum of the digits is 15, and 15 is divisible by 3.

9,042 is not divisible by 4 since 42 is not divisible by 4.

9,042 is not divisible by 5 since the last digit is not 0 or 5.

9,042 is divisible by 6 since it is divisible by both 2 and 3.

9,042 is not divisible by 8 since the last 3 digits are 042, and 42 is not divisible by 8.

9,042 is not divisible by 9 since the sum of the digits is 15, and 15 is not divisible by 9.

9,042 is not divisible by 10 since the last digit is not 0.

Solution: 9,042 is divisible by 2, 3 and 6.


Example 5: Determine whether 35,120 is divisible by 2, 3, 4, 5, 6, 8, 9 and 10.

35,120 is divisible by 2 since the last digit is 0.

35,120 is not divisible by 3 since the sum of the digits is 11, and 11 is not divisible by 3.

35,120 is divisible by 4 since 20 is divisible by 4.

35,120 is divisible by 5 since the last digit is 0.

35,120 is not divisible by 6 since it is not divisible by both 2 and 3.

35,120 is divisible by 8 since the last 3 digits are 120, and 120 is divisible by 8.

35,120 is not divisible by 9 since the sum of the digits is 11, and 11 is not divisible by 9.

35,120 is divisible by 10 since the last digit is 0.

Solution: 35,120 is divisible by 2, 4, 5, 8 and 10.

Example 6: Is the number 91 prime or composite? Use divisibility when possible to find your
answer.

91 is not divisible by 2 since the last digit is not 0, 2, 4, 6 or 8.

91 is not divisible by 3 since the sum of the digits (9+1=10) is not divisible by 3.

91 is not evenly divisible by 4 (remainder is 3).

91 is not divisible by 5 since the last digit is not 0 or 5.

91 is not divisible by 6 since it is not divisible by both 2 and 3.

91 divided by 7 is 13.
Solution: The number 91 is divisible by 1, 7, 13 and 91. Therefore 91 is composite since it has
more than two factors.

Summary: Divisibility tests can be used to find factors of large whole numbers quickly, and thus
determine if they are prime or composite. When working with large whole numbers, tests for
divisibility are more efficient than the traditional factoring method.
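As a sketch, the tests in the table lend themselves directly to code. The following Python (function names are my own) applies each rule and confirms the example that opened the chapter, namely that 621 is composite:

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def divisors_found(n):
    """Apply the divisibility tests from the table; returns the set of
    divisors among 2, 3, 4, 5, 6, 8, 9, 10 that the tests detect."""
    found = set()
    last = n % 10
    if last in (0, 2, 4, 6, 8):
        found.add(2)              # last digit 0, 2, 4, 6 or 8
    if digit_sum(n) % 3 == 0:
        found.add(3)              # digit sum divisible by 3
    if (n % 100) % 4 == 0:
        found.add(4)              # last two digits divisible by 4
    if last in (0, 5):
        found.add(5)              # last digit 0 or 5
    if {2, 3} <= found:
        found.add(6)              # divisible by both 2 and 3
    if (n % 1000) % 8 == 0:
        found.add(8)              # last three digits divisible by 8
    if digit_sum(n) % 9 == 0:
        found.add(9)              # digit sum divisible by 9
    if last == 0:
        found.add(10)             # last digit 0
    return found
```

For 621 the tests detect the factors 3 and 9; since 3 is a factor other than 1 and 621 itself, the number is composite.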

2. DIVISION ALGORITHM
This article is about algorithms for division of integers. For the theorem proving the existence of
a unique quotient and remainder, see Euclidean division. For the division algorithm for
polynomials, see polynomial long division.

A division algorithm is an algorithm which, given two integers N and D, computes their
quotient and/or remainder, the result of Euclidean division. Some are applied by hand,
while others are employed by digital circuit designs and software.

Division algorithms fall into two main categories: slow division and fast division. Slow division
algorithms produce one digit of the final quotient per iteration. Examples of slow division
include restoring, non-performing restoring, non-restoring, and SRT division. Fast
division methods start with a close approximation to the final quotient and produce twice
as many digits of the final quotient on each iteration. Newton–Raphson and Goldschmidt
algorithms fall into this category.

Variants of these algorithms allow using fast multiplication algorithms. It results that, for large
integers, the computer time needed for a division is the same, up to a constant factor, as
the time needed for a multiplication, whichever multiplication algorithm is used.

Discussion will refer to the form N/D = (Q, R), where

 N = Numerator (dividend)
 D = Denominator (divisor)

is the input, and

 Q = Quotient
 R = Remainder

The simplest division algorithm, historically incorporated into a greatest common divisor algorithm presented in Euclid's Elements, Book VII, Proposition 1, finds the remainder given two positive integers using only subtractions and comparisons:

while N ≥ D do
N := N − D
end
return N

The proof that the quotient and remainder exist and are unique (described at Euclidean division)
gives rise to a complete division algorithm using additions, subtractions, and comparisons:

function divide(N, D)
  if D = 0 then error(DivisionByZero) end
  if D < 0 then (Q, R) := divide(N, −D); return (−Q, R) end
  if N < 0 then
    (Q, R) := divide(−N, D)
    if R = 0 then return (−Q, 0)
    else return (−Q − 1, D − R) end
  end
  -- At this point, N ≥ 0 and D > 0
  return divide_unsigned(N, D)
end

function divide_unsigned(N, D)
  Q := 0; R := N
  while R ≥ D do
    Q := Q + 1
    R := R − D
  end
  return (Q, R)
end
This procedure always produces R ≥ 0. Although very simple, it takes Ω(Q) steps, and so is
exponentially slower than even slow division algorithms like long division. It is useful if Q is
known to be small (being an output-sensitive algorithm), and can serve as an executable
specification.
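The pseudocode above can be tried directly; here is a near-literal Python transcription, kept as a sketch of the executable specification the text describes (the unsigned repeated-subtraction loop is inlined at the end):

```python
def divide(n, d):
    """Euclidean division via repeated subtraction; returns (q, r) with 0 <= r < |d|."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    if d < 0:
        q, r = divide(n, -d)
        return -q, r
    if n < 0:
        q, r = divide(-n, d)
        return (-q, 0) if r == 0 else (-q - 1, d - r)
    # At this point n >= 0 and d > 0: repeated subtraction
    q, r = 0, n
    while r >= d:
        q, r = q + 1, r - d
    return q, r
```

In every case the invariant n = d * q + r holds with 0 <= r < |d|, which is what makes the remainder unique.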

Long division

Main article: Long division § Algorithm for arbitrary base

Long division is the standard algorithm used for pen-and-paper division of multi-digit numbers
expressed in decimal notation. It shifts gradually from the left to the right end of the dividend,
subtracting the largest possible multiple of the divisor (at the digit level) at each stage; the
multiples then become the digits of the quotient, and the final difference is then the remainder.

When used with a binary radix, this method forms the basis for the (unsigned) integer division
with remainder algorithm below. Short division is an abbreviated form of long division suitable
for one-digit divisors. Chunking – also known as the partial quotients method or the hangman
method – is a less-efficient form of long division which may be easier to understand. By
allowing one to subtract more multiples than what one currently has at each stage, a more
freeform variant of long division can be developed as well[1]

Integer division (unsigned) with remainder

Main article: Long division § Binary division

See also: Binary number § Division

The following algorithm, the binary version of the famous long division, will divide N by D,
placing the quotient in Q and the remainder in R. In the following code, all values are treated as
unsigned integers.

if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is the number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator
  if R ≥ D then
    R := R − D
    Q(i) := 1
  end
end

Example

If we take N = 1100₂ (12₁₀) and D = 100₂ (4₁₀):

Step 1: Set R=0 and Q=0


Step 2: Take i=3 (one less than the number of bits in N)
Step 3: R=00 (left shifted by 1)
Step 4: R=01 (setting R(0) to N(i))
Step 5: R<D, so skip statement

Step 2: Set i=2


Step 3: R=010
Step 4: R=011
Step 5: R<D, statement skipped

Step 2: Set i=1


Step 3: R=0110
Step 4: R=0110
Step 5: R>=D, statement entered
Step 5b: R=10 (R−D)
Step 5c: Q=10 (setting Q(i) to 1)

Step 2: Set i=0


Step 3: R=100
Step 4: R=100
Step 5: R>=D, statement entered
Step 5b: R=0 (R−D)
Step 5c: Q=11 (setting Q(i) to 1)

end
Q = 11₂ (3₁₀) and R = 0.
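The same shift-subtract loop can be checked in Python; this sketch (helper name my own) treats the operands bit by bit exactly as in the pseudocode:

```python
def divide_binary(n, d, bits=32):
    """Unsigned binary long division: returns (q, r)."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    q = r = 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) | ((n >> i) & 1)   # shift R left, bring down bit i of N
        if r >= d:
            r -= d                      # subtract the divisor
            q |= 1 << i                 # set bit i of the quotient
    return q, r
```

With bits=4 the function retraces the worked example step for step: dividing 1100₂ by 100₂ yields Q = 11₂ and R = 0.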

Slow division methods

Slow division methods are all based on a standard recurrence equation

R_{j+1} = B × R_j − q_{n−(j+1)} × D

where:

• Rj is the j-th partial remainder of the division

• B is the radix (base, usually 2 internally in computers and calculators)

• q_{n−(j+1)} is the digit of the quotient in position n−(j+1), where the digit positions are numbered from least-significant 0 to most-significant n−1

• n is number of digits in the quotient

• D is the divisor

Restoring division

Restoring division operates on fixed-point fractional numbers and depends on the assumption 0 < D < N.

The quotient digits q are formed from the digit set {0,1}.

The basic algorithm for binary (radix 2) restoring division is:

R := N
D := D << n             -- R and D need twice the word width of N and Q
for i := n − 1 .. 0 do  -- For example 31..0 for 32 bits
  R := 2 * R − D        -- Trial subtraction from shifted value (multiplication by 2 is a shift in binary representation)
  if R ≥ 0 then
    q(i) := 1           -- Result-bit 1
  else
    q(i) := 0           -- Result-bit 0
    R := R + D          -- New partial remainder is (restored) shifted value
  end
end

-- Where: N = Numerator, D = Denominator, n = #bits, R = Partial remainder, q(i) = bit #i of quotient

The above restoring division algorithm can avoid the restoring step by saving the shifted value
2R before the subtraction in an additional register T (i.e., T = R << 1) and copying register T to
R when the result of the subtraction 2R − D is negative.

Non-performing restoring division is similar to restoring division except that the value of 2R is
saved, so D does not need to be added back in for the case of R < 0.
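As a sketch, the restoring recurrence can be checked in Python (assuming, as the algorithm requires, that the quotient fits in n bits; R and D occupy twice that width):

```python
def restoring_divide(n_val, d_val, n_bits):
    """Radix-2 restoring division sketch; returns (q, r)."""
    r = n_val
    d = d_val << n_bits              # R and D get twice the word width
    q = 0
    for i in range(n_bits - 1, -1, -1):
        r = 2 * r - d                # trial subtraction from the shifted value
        if r >= 0:
            q |= 1 << i              # result bit 1
        else:
            r += d                   # result bit 0: restore the partial remainder
    return q, r >> n_bits            # after the loop, R = remainder << n_bits
```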

Non-restoring division

Non-restoring division uses the digit set {−1, 1} for the quotient digits instead of {0, 1}. The
algorithm is more complex, but has the advantage when implemented in hardware that there is
only one decision and addition/subtraction per quotient bit; there is no restoring step after the
subtraction, which potentially cuts down the numbers of operations by up to half and lets it be
executed faster.[2] The basic algorithm for binary (radix 2) non-restoring division of non-
negative numbers is:

R := N
D := D << n             -- R and D need twice the word width of N and Q
for i = n − 1 .. 0 do   -- for example 31..0 for 32 bits
  if R >= 0 then
    q[i] := +1
    R := 2 * R − D
  else
    q[i] := −1
    R := 2 * R + D
  end if
end

-- Note: N = Numerator, D = Denominator, n = #bits, R = Partial remainder, q(i) = bit #i of quotient.

Following this algorithm, the quotient is in a non-standard form consisting of digits of −1 and +1. This form needs to be converted to binary to form the final quotient. If the −1 digits of Q are stored as zeros (0), as is common, the conversion is trivial: perform a one's complement (bit-by-bit complement) on the stored Q and subtract it:

Q := Q − bit.Not(Q)   -- appropriate if −1 digits in Q are represented as zeros, as is common

Finally, quotients computed by this algorithm are always odd, and the remainder in R is in the
range −D ≤ R < D. For example, 5 / 2 = 3 R −1. To convert to a positive remainder, do a single
restoring step after Q is converted from non-standard form to standard form:
if R < 0 then
  Q := Q − 1
  R := R + D   -- Needed only if the remainder is of interest.
end if

The actual remainder is R >> n. (As with restoring division, the low-order bits of R are used up
at the same rate as bits of the quotient Q are produced, and it is common to use a single shift
register for both.)
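The whole non-restoring flow (the {−1, +1} quotient digits, the conversion to binary, and the final restoring step) can be sketched in Python; before the correction it reproduces the 5 / 2 = 3 R −1 case mentioned above:

```python
def nonrestoring_divide(n_val, d_val, n_bits):
    """Radix-2 non-restoring division; returns (q, r) with 0 <= r < d."""
    r = n_val
    d = d_val << n_bits
    digits = [0] * n_bits                # quotient digits, each -1 or +1
    for i in range(n_bits - 1, -1, -1):
        if r >= 0:
            digits[i] = 1
            r = 2 * r - d
        else:
            digits[i] = -1
            r = 2 * r + d
    # convert the {-1, +1} digit string to an ordinary binary integer
    q = sum(dig << i for i, dig in enumerate(digits))
    if r < 0:                            # single restoring step
        q -= 1
        r += d
    return q, r >> n_bits                # actual remainder is R >> n
```

For N = 5, D = 2 the loop first produces Q = 3 and a negative partial remainder (the 3 R −1 form), and the restoring step corrects this to (2, 1).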

SRT division

Named for its creators (Sweeney, Robertson, and Tocher), SRT division is a popular method for
division in many microprocessor implementations.[3][4] SRT division is similar to non-restoring
division, but it uses a lookup table based on the dividend and the divisor to determine each
quotient digit.

The most significant difference is that a redundant representation is used for the quotient. For
example, when implementing radix-4 SRT division, each quotient digit is chosen from five
possibilities: { −2, −1, 0, +1, +2 }. Because of this, the choice of a quotient digit need not be
perfect; later quotient digits can correct for slight errors. (For example, the quotient digit pairs (0,
+2) and (1, −2) are equivalent, since 0×4+2 = 1×4−2.) This tolerance allows quotient digits to be
selected using only a few most-significant bits of the dividend and divisor, rather than requiring a
full-width subtraction. This simplification in turn allows a radix higher than 2 to be used.

Like non-restoring division, the final steps are a final full-width subtraction to resolve the last
quotient bit, and conversion of the quotient to standard binary form.

The Intel Pentium processor's infamous floating-point division bug was caused by an incorrectly
coded lookup table. Five of the 1066 entries had been mistakenly omitted.[5][6]

Fast division methods


Newton–Raphson division

Newton–Raphson uses Newton's method to find the reciprocal of D and multiplies that reciprocal by N to find the final quotient Q.

The steps of Newton–Raphson division are:

1. Calculate an estimate X₀ for the reciprocal 1/D of the divisor D.

2. Compute successively more accurate estimates X₁, X₂, … of the reciprocal. This is where one employs the Newton–Raphson method as such.

3. Compute the quotient by multiplying the dividend by the reciprocal of the divisor: Q = N × X_k, where X_k is the final estimate.

In order to apply Newton's method to find the reciprocal of D, it is necessary to find a function f(X) that has a zero at X = 1/D. The obvious such function is f(X) = DX − 1, but the Newton–Raphson iteration for this is unhelpful, since it cannot be computed without already knowing the reciprocal of D (moreover it attempts to compute the exact reciprocal in one step, rather than allow for iterative improvements). A function that does work is f(X) = 1/X − D, for which the Newton–Raphson iteration gives

X_{i+1} = X_i − f(X_i)/f′(X_i) = X_i + X_i(1 − D X_i) = X_i(2 − D X_i),

which can be calculated from X_i using only multiplication and subtraction, or using two fused multiply–adds.

From a computation point of view, the expressions X_i + X_i(1 − D X_i) and X_i(2 − D X_i) are not equivalent. To obtain a result with a precision of 2n bits while making use of the second expression, one must compute the product between X_i and (2 − D X_i) with double the given precision of X_i (n bits). In contrast, the product between X_i and (1 − D X_i) need only be computed with a precision of n bits, because the leading n bits (after the binary point) of (1 − D X_i) are zeros.

If the error is defined as ε_i = 1 − D X_i, then:

ε_{i+1} = 1 − D X_{i+1} = 1 − D X_i(2 − D X_i) = (1 − D X_i)² = ε_i².

This squaring of the error at each iteration step – the so-called quadratic convergence of
Newton–Raphson's method – has the effect that the number of correct digits in the result roughly
doubles for every iteration, a property that becomes extremely valuable when the numbers
involved have many digits (e.g. in the large integer domain). But it also means that the initial
convergence of the method can be comparatively slow, especially if the initial estimate is poorly
chosen.

For the subproblem of choosing an initial estimate X₀, it is convenient to apply a bit-shift to the divisor D to scale it so that 0.5 ≤ D ≤ 1; by applying the same bit-shift to the numerator N, one ensures the quotient does not change. Then one could use a linear approximation in the form

X₀ = T₁ + T₂ D

to initialize Newton–Raphson. To minimize the maximum of the absolute value of the error of this approximation on the interval [0.5, 1], one should use

X₀ = 48/17 − 32/17 × D.

The coefficients of the linear approximation are determined as follows. The absolute value of the error is |ε₀| = |1 − D(T₁ + T₂ D)|. The minimum of the maximum absolute value of the error is determined by the Chebyshev equioscillation theorem applied to F(D) = 1 − D(T₁ + T₂ D). The local extremum of F occurs where F′(D) = 0, which has solution D = −T₁/(2T₂). The function at that extremum must be of opposite sign as the function at the endpoints, namely F(1/2) = F(1) = −F(−T₁/(2T₂)). The two equations in the two unknowns have a unique solution T₁ = 48/17 and T₂ = −32/17, and the maximum error is F(1) = 1/17. Using this approximation, the absolute value of the error of the initial value is less than

|ε₀| ≤ 1/17 ≈ 0.059.

It is possible to generate a polynomial fit of degree larger than 1, computing the coefficients
using the Remez algorithm. The trade-off is that the initial guess requires more computational
cycles but hopefully in exchange for fewer iterations of Newton–Raphson.

Since for this method the convergence is exactly quadratic, it follows that

S = ⌈log₂((P + 1) / log₂ 17)⌉

steps are enough to calculate the value up to P binary places. This evaluates to 3 for IEEE single precision and 4 for both double precision and double extended formats.

Pseudocode

The following computes the quotient of N and D with a precision of P binary places:

Express D as M × 2^e where 1 ≤ M < 2 (standard floating-point representation)
D' := D / 2^(e+1)                       // scale between 0.5 and 1, can be performed with bit shift / exponent subtraction
N' := N / 2^(e+1)
X := 48/17 − 32/17 × D'                 // precompute constants with same precision as D
repeat ⌈log₂((P + 1) / log₂ 17)⌉ times  // can be precomputed based on fixed P
  X := X + X × (1 − D' × X)
end
return N' × X

For example, for a double-precision floating-point division, this method uses 10 multiplies, 9
adds, and 2 shifts.
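In floating point the pseudocode can be tried directly. This Python sketch uses math.frexp for the scaling step and the linear initial estimate from above; four iterations suffice for double precision:

```python
import math

def newton_divide(n, d, iterations=4):
    """Approximate n/d (n, d > 0) via Newton-Raphson reciprocal iteration."""
    m, e = math.frexp(d)                 # d = m * 2**e with 0.5 <= m < 1
    d_scaled = m
    n_scaled = n / 2.0**e                # same shift keeps the quotient unchanged
    x = 48/17 - 32/17 * d_scaled         # linear initial estimate, |error| <= 1/17
    for _ in range(iterations):
        x = x + x * (1 - d_scaled * x)   # error squares on every pass
    return n_scaled * x
```

After four iterations the initial error bound of 1/17 has been squared four times, far below the roundoff of a double, so the result matches ordinary floating-point division to machine precision.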

Variant Newton–Raphson division

The Newton–Raphson division method can be modified to be slightly faster as follows. After shifting N and D so that D is in [0.5, 1.0], initialize with

X := 140/33 + D × (−64/11 + D × 256/99).

This is the best quadratic fit to 1/D and gives an absolute value of the error less than or equal to 1/99. It is chosen to make the error equal to a re-scaled third-order Chebyshev polynomial of the first kind. The coefficients should be pre-calculated and hard-coded.

Then in the loop, use an iteration which cubes the error:

E := 1 − D × X
Y := X × E
X := X + Y + Y × E.

The Y × E term is new.

If the loop is performed until X agrees with 1/D on its leading P bits, then the number of iterations will be no more than

⌈log₃((P + 1) / log₂ 99)⌉,

which is the number of times 99 must be cubed to get to 2^(P+1). Then

Q := N × X

is the quotient to P bits.


Using higher degree polynomials in either the initialization or the iteration results in a
degradation of performance because the extra multiplications required would be better spent on
doing more iterations.

Goldschmidt division

Goldschmidt division[7] (after Robert Elliott Goldschmidt[8]) uses an iterative process of repeatedly multiplying both the dividend and divisor by a common factor Fi, chosen such that the divisor converges to 1. This causes the dividend to converge to the sought quotient Q.

The steps for Goldschmidt division are:

1. Generate an estimate for the multiplication factor Fi .

2. Multiply the dividend and divisor by Fi .

3. If the divisor is sufficiently close to 1, return the dividend, otherwise, loop to step 1.

Assuming N/D has been scaled so that 0 < D < 1, each Fi is based on D:

F_{i+1} = 2 − D_i.

Multiplying the dividend and divisor by the factor yields:

N_{i+1} / D_{i+1} = (N_i × F_{i+1}) / (D_i × F_{i+1}).

After a sufficient number k of iterations, D_k is close enough to 1 that Q = N_k.
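The steps above can be sketched in floating point; the frexp scaling and the stopping tolerance here are my own choices:

```python
import math

def goldschmidt_divide(n, d, tol=1e-15):
    """Approximate n/d (n, d > 0) by driving the divisor toward 1."""
    m, e = math.frexp(d)         # scale so the working divisor is in [0.5, 1)
    num, den = n / 2.0**e, m
    while abs(1.0 - den) > tol:
        f = 2.0 - den            # common factor F = 2 - D
        num *= f                 # multiply dividend ...
        den *= f                 # ... and divisor by the same factor
    return num                   # divisor ~ 1, so the dividend ~ quotient
```

Because 1 − den squares on every pass, the loop terminates quickly; the ratio num/den is preserved throughout (up to rounding), so num converges to the quotient.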

The Goldschmidt method is used in AMD Athlon CPUs and later models.[9][10] It is also known as the Anderson Earle Goldschmidt Powers (AEGP) algorithm and is implemented by various IBM processors.[11][12]

Large-integer methods

Methods designed for hardware implementation generally do not scale to integers with thousands
or millions of decimal digits; these frequently occur, for example, in modular reductions in
cryptography. For these large integers, more efficient division algorithms transform the problem
to use a small number of multiplications, which can then be done using an asymptotically
efficient multiplication algorithm such as the Karatsuba algorithm, Toom–Cook multiplication or
the Schönhage–Strassen algorithm. The result is that the computational complexity of the
division is of the same order (up to a multiplicative constant) as that of the multiplication.
Examples include reduction to multiplication by Newton's method as described above,[13] as well as the slightly faster Barrett reduction and Montgomery reduction algorithms.[14]
Newton's method is particularly efficient in scenarios where one must divide by the same divisor
many times, since after the initial Newton inversion only one (truncated) multiplication is needed
for each division.

Division by a constant

The division by a constant D is equivalent to the multiplication by its reciprocal. Since the
denominator is constant, so is its reciprocal (1/D). Thus it is possible to compute the value of
(1/D) once at compile time, and at run time perform the multiplication N·(1/D) rather than the
division N/D. In floating-point arithmetic the use of (1/D) presents little problem, but in integer
arithmetic the reciprocal will always evaluate to zero (assuming |D| > 1).

It is not necessary to use specifically (1/D); any value (X/Y) that reduces to (1/D) may be used.
For example, for division by 3, the factors 1/3, 2/6, 3/9, or 194/582 could be used. Consequently,
if Y were a power of two the division step would reduce to a fast right bit shift. The effect of
calculating N/D as (N·X)/Y replaces a division with a multiply and a shift. Note that the
parentheses are important, as N·(X/Y) will evaluate to zero.

However, unless D itself is a power of two, there is no X and Y that satisfies the conditions
above. Fortunately, (N·X)/Y gives exactly the same result as N/D in integer arithmetic even when
(X/Y) is not exactly equal to 1/D, but "close enough" that the error introduced by the
approximation is in the bits that are discarded by the shift operation.[15][16][17]

As a concrete fixed-point arithmetic example, for 32-bit unsigned integers, division by 3 can be replaced with a multiply by 2863311531/2^33, that is, a multiplication by 2863311531 (hexadecimal 0xAAAAAAAB) followed by a 33-bit right shift. The value of 2863311531 is calculated as 2^33/3, then rounded up.

Likewise, division by 10 can be expressed as a multiplication by 3435973837 (0xCCCCCCCD) followed by division by 2^35 (or a 35-bit right shift).
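Both magic constants can be checked in a few lines of Python (arbitrary-precision integers stand in for the 64-bit product register a CPU would use):

```python
MAGIC3 = 2863311531    # 0xAAAAAAAB, 2**33 / 3 rounded up
MAGIC10 = 3435973837   # 0xCCCCCCCD, 2**35 / 10 rounded up

def div3(n):
    """n // 3 for 32-bit unsigned n, via multiply and shift."""
    return (n * MAGIC3) >> 33

def div10(n):
    """n // 10 for 32-bit unsigned n, via multiply and shift."""
    return (n * MAGIC10) >> 35
```

The rounding-up of the reciprocal puts the approximation error in the low bits that the shift discards, so the result is exact for every 32-bit unsigned input.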
In some cases, division by a constant can be accomplished in even less time by converting the
"multiply by a constant" into a series of shifts and adds or subtracts.[18] Of particular interest is
division by 10, for which the exact quotient is obtained, with remainder if required.[19]

4. GREATEST COMMON DIVISOR


The greatest common divisor (GCD), also called the greatest common factor, of two
numbers is the largest number that divides them both. For instance, the greatest common factor
of 20 and 15 is 5, since 5 divides both 20 and 15 and no larger number has this property. The
concept is easily extended to sets of more than two numbers: the GCD of a set of numbers is the
largest number dividing each of them.

The GCD is used for a variety of applications in number theory, particularly in modular
arithmetic and thus encryption algorithms such as RSA. It is also used for simpler applications,
such as simplifying fractions. This makes the GCD a rather fundamental concept to number
theory, and as such a number of algorithms have been discovered to efficiently compute it.

The GCD is traditionally notated as gcd(a, b), or when the context is clear, simply (a, b).

Computing the greatest common divisor

The GCD of several numbers may be computed by simply listing the factors of each number and determining the largest common one. While in practice this is terribly inefficient, for particularly small cases it is doable by hand. The process may be split up using the method of factor pairs: once one determines a factor a of a number n, the quotient n/a is necessarily a factor as well. For instance, since 2 is a factor of 24, 24/2 = 12 is a factor as well.

Find the greatest common divisor of 30, 36, and 24.

The divisors of each number are given by

30: 1, 2, 3, 5, 6, 10, 15, 30
36: 1, 2, 3, 4, 6, 9, 12, 18, 36
24: 1, 2, 3, 4, 6, 8, 12, 24

The largest number that appears on every list is 6, so this is the greatest common divisor:

gcd(30, 36, 24) = 6. □

When the numbers are large, the list of factors can be prohibitively long, making the above
method very difficult. A somewhat more efficient method is to first compute the prime
factorization of each number in the set. The resulting GCD is the product of the primes that
appear in every factorization, to the smallest exponent seen in the factorizations. This is
confusing in words, so let's see an example:

Compute gcd(4200, 3780, 3528).

We have

4200 = 2^3 · 3 · 5^2 · 7
3780 = 2^2 · 3^3 · 5 · 7
3528 = 2^3 · 3^2 · 7^2.

Since 2 appears in each of these factorizations, it will appear in the GCD as well. It is taken to the smallest power seen in the factorizations, which in this case is 2. So the GCD will contain 2^2 in its factorization. Continuing along these lines, we obtain a GCD of

2^2 · 3 · 7 = 84. □

In formal notation, if the prime factorizations of a and b are

a = p_1^{α_1} p_2^{α_2} … p_k^{α_k},   b = p_1^{β_1} p_2^{β_2} … p_k^{β_k},

where the p_i are distinct primes and the α_i and β_i are nonnegative integers, then

gcd(a, b) = p_1^{min(α_1, β_1)} p_2^{min(α_2, β_2)} … p_k^{min(α_k, β_k)}.

A similar formula holds for finding the GCD of several integers, by taking the smallest exponent
for each prime.
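The min-exponent rule translates directly into code. This sketch (helper names my own) factorizes by trial division, which is fine at this size, and intersects the exponent maps:

```python
from collections import Counter
from functools import reduce

def factorize(n):
    """Prime factorization of n as a Counter {prime: exponent}."""
    factors = Counter()
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] += 1
            n //= p
        p += 1
    if n > 1:
        factors[n] += 1          # leftover factor is prime
    return factors

def gcd_by_factorization(*numbers):
    """GCD as the product of shared primes to their smallest exponents."""
    common = reduce(lambda a, b: a & b, map(factorize, numbers))  # & keeps min counts
    return reduce(lambda acc, pe: acc * pe[0] ** pe[1], common.items(), 1)
```

Counter's `&` operator keeps each key at the minimum of its counts, which is exactly the min(α_i, β_i) rule above.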

Three gold coins of weight 780 g, 840 g, and 960 g are cut into small pieces of equal weight. If it
takes 2 people to transport one piece of gold, what is the fewest number of people that are
needed to transport all these pieces?

While the prime factorization method is often the most practical to do by hand, occasionally determining the prime factorization is very difficult, in which case an alternate approach becomes necessary. Generally, in these cases, algorithms such as the one in the next section are used.

5. EUCLID'S ALGORITHM

The GCD of two numbers is the largest number that divides both of them. A simple way to find the GCD is to factorize both numbers and multiply the common factors.

Basic Euclidean Algorithm for GCD

The algorithm is based on the following facts.

If we subtract the smaller number from the larger (reducing the larger number), the GCD does not change. So if we keep subtracting the larger of the two repeatedly, we end up with the GCD.

Now, instead of subtracting, if we divide the larger number by the smaller and replace it with the remainder, the algorithm stops when we find remainder 0.
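The two facts above translate into a few lines each; gcd_subtraction and gcd_euclid are illustrative names for this sketch:

```python
def gcd_subtraction(a, b):
    """Repeatedly replace the larger of the two positive numbers by their difference."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

def gcd_euclid(a, b):
    """Division form: replace (a, b) by (b, a mod b) until the remainder is 0."""
    while b:
        a, b = b, a % b
    return a

print(gcd_subtraction(98, 56))  # 14
print(gcd_euclid(98, 56))       # 14
```

The division form is much faster, since one remainder step can replace many subtractions.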

6. FUNDAMENTAL THEOREM OF ARITHMETIC

The fundamental theorem of arithmetic, also called the unique factorization theorem, states that every integer greater than 1 is either a prime itself or can be represented as a product of primes, and that this representation is unique up to the order of the factors.

For example, 1200 = 2⁴ · 3 · 5², and no other product of primes equals 1200, however the factors are rearranged. The requirement that the factors be prime is necessary: factorizations containing composite numbers need not be unique (for example, 12 = 4 · 3 = 2 · 6).

This theorem is what makes the prime factorization method for the GCD in the previous sections well defined: each number has exactly one prime factorization, so the smallest exponent of each common prime is unambiguous.

7. PROPERTIES OF PRIMES
A prime number (or a prime) is a natural number greater than 1 that cannot be formed by
multiplying two smaller natural numbers. A natural number greater than 1 that is not prime is
called a composite number. For example, 5 is prime because the only ways of writing it as a
product, 1 × 5 or 5 × 1, involve 5 itself. However, 6 is composite because it is the product of two
numbers (2 × 3) that are both smaller than 6. Primes are central in number theory because of the
fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or
can be factorized as a product of primes that is unique up to their order.

The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and √n. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always produces the correct answer in polynomial time but is too slow to be practical. Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. As of December 2018 the largest known prime number has 24,862,048 decimal digits.
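Trial division, as just described, is a short loop. A minimal sketch, applied to the number 621 from the introduction:

```python
from math import isqrt

def is_prime_trial_division(n):
    """Trial division: n > 1 is prime iff no integer in [2, sqrt(n)] divides it."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime_trial_division(621))  # False (621 = 3 * 207, so it is composite)
print(is_prime_trial_division(619))  # True
```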

There are infinitely many primes, as demonstrated by Euclid around 300 BC. No known simple
formula separates prime numbers from composite numbers. However, the distribution of primes
within the natural numbers in the large can be statistically modelled. The first result in that
direction is the prime number theorem, proven at the end of the 19th century, which says that the
probability of a randomly chosen number being prime is inversely proportional to its number of
digits, that is, to its logarithm.

Several historical questions regarding prime numbers are still unsolved. These include
Goldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two
primes, and the twin prime conjecture, that there are infinitely many pairs of primes having just
one even number between them. Such questions spurred the development of various branches of
number theory, focusing on analytic or algebraic aspects of numbers. Primes are used in several
routines in information technology, such as public-key cryptography, which relies on the
difficulty of factoring large numbers into their prime factors. In abstract algebra, objects that
behave in a generalized way like prime numbers include prime elements and prime ideals.
CHAPTER II: INTRODUCTION
1. Montgomery Reduction
2. Modular Exponentiation
3. Linear Congruence Theorem
4. Method of Successive Substitution
5. Chinese Remainder Theorem
6. Fermat's Little Theorem
7. Fermat Quotient
8. Euler Quotient
9. Euler Totient Function
   a) Noncototient
   b) Nontotient
10. Euler and Wilson's Theorem
11. Primitive Root Modulo n
    a) Multiplicative Order
    b) Discrete Logarithm
12. Quadratic Residue
13. Euler's Criterion
14. Legendre Symbol
15. Gauss's Lemma (Number Theory)
1. MONTGOMERY REDUCTION

In modular arithmetic computation, Montgomery modular multiplication, more commonly referred to as Montgomery multiplication, is a method for performing fast modular multiplication. It was introduced in 1985 by the American mathematician Peter L. Montgomery.[1][2]
Given two integers a and b and modulus N, the classical modular multiplication algorithm
computes the double-width product ab, and then performs a division, subtracting multiples
of N to cancel out the unwanted high bits until the remainder is once again less than N.
Montgomery reduction instead adds multiples of N to cancel out the low bits until the
result is a multiple of a convenient (i.e. power of two) constant R > N. Then the low bits are
discarded, producing a result less than 2N. One conditional final subtraction reduces this to
less than N. This procedure has a better computational complexity than standard division
algorithms, since it avoids the quotient digit estimation and correction that they need.
The result is the desired product divided by R, which is less inconvenient than it might
appear. To multiply a and b, they are first converted to Montgomery form or
Montgomery representation aR mod N and bR mod N. When multiplied, these produce
abR2 mod N, and the following Montgomery reduction produces abR mod N, the
Montgomery form of the desired product. (A final second Montgomery reduction converts
out of Montgomery form.)

Converting to and from Montgomery form makes this slower than the conventional or
Barrett reduction algorithms for a single multiply. However, when performing many
multiplications in a row, as in modular exponentiation, intermediate results can be left in
Montgomery form, and the initial and final conversions become a negligible fraction of the
overall computation. Many important cryptosystems such as RSA and Diffie–Hellman key
exchange are based on arithmetic operations modulo a large number, and for these
cryptosystems, the computation by Montgomery multiplication is faster than the available
alternatives.[3]
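The reduction step described above can be sketched in a few lines. This is a toy illustration with small numbers (a production implementation works on machine-word digits), and it assumes Python 3.8+ for the modular inverse via pow(N, -1, R); montgomery_setup and redc are illustrative names:

```python
def montgomery_setup(N, R):
    """Precompute N' = -N^(-1) mod R; requires N odd and R a power of two > N."""
    return (-pow(N, -1, R)) % R

def redc(T, N, R, N_prime):
    """Montgomery reduction: returns T * R^(-1) mod N, for 0 <= T < R*N."""
    m = ((T % R) * N_prime) % R   # choose m so that T + m*N is divisible by R
    t = (T + m * N) // R          # exact division by R; t < 2N
    return t - N if t >= N else t # one conditional subtraction

N, R = 97, 256                    # toy modulus and R = 2^8 > N
Np = montgomery_setup(N, R)
a, b = 7, 11
aR, bR = (a * R) % N, (b * R) % N        # convert to Montgomery form
prod_R = redc(aR * bR, N, R, Np)         # Montgomery form of a*b
print(redc(prod_R, N, R, Np))            # 77, i.e. (7 * 11) % 97
```

Note how the low bits are cancelled by adding a multiple of N, and the division by R is just a shift in a binary implementation.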
2. MODULAR EXPONENTIATION

Modular Exponentiation (Power in Modular Arithmetic)

Given three numbers x, y and p, compute (x^y) % p.

Examples :

Input: x = 2, y = 3, p = 5
Output: 3
Explanation: 2^3 % 5 = 8 % 5 = 3.

Input: x = 2, y = 5, p = 13
Output: 6
Explanation: 2^5 % 13 = 32 % 13 = 6.

Both recursive and iterative fast-power routines compute this efficiently by repeated squaring, using O(log y) multiplications instead of y − 1.
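An iterative square-and-multiply sketch (Python's built-in pow(x, y, p) does the same thing natively):

```python
def power_mod(x, y, p):
    """Compute (x^y) % p with O(log y) multiplications by repeated squaring."""
    result = 1
    x %= p
    while y > 0:
        if y & 1:                  # current exponent bit is 1: multiply base in
            result = (result * x) % p
        x = (x * x) % p            # square the base
        y >>= 1                    # move to the next bit of the exponent
    return result

print(power_mod(2, 3, 5))    # 3
print(power_mod(2, 5, 13))   # 6
```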

3. LINEAR CONGRUENCE THEOREM


In ordinary algebra, an equation of the form ax = b (where a and b are given real numbers) is called a linear equation, and its solution x = b/a is obtained by multiplying both sides of the equation by a⁻¹ = 1/a. The subject of this section is how to solve any linear congruence

ax ≡ b (mod m),

where a, b are given integers and m is a given positive integer.

For a simple example, you can easily check by inspection that the linear congruence 6x ≡ 4 (mod 10) has solutions x = 4, 9. Already we see a difference from ordinary algebra: linear congruences can have more than one solution! Are these the ONLY solutions? No. In fact, any integer which is congruent to either 4 or 9 mod 10 is also a solution. You should check this for yourself now. So any integer of the form 4 + 10k or of the form 9 + 10k, where k ∈ Z, is a solution to the given linear congruence. The above linear congruence has infinitely many integer solutions.

There is a general principle at work here: solutions to linear congruences are always entire congruence classes. If any member of a congruence class is a solution, then all members are. This is a simple consequence of the properties of congruences proved in a previous lecture. This means that although the congruence 6x ≡ 4 (mod 10) has infinitely many integer solutions, the solutions fall into congruence classes, and there are only two of those: [4]₁₀ and [9]₁₀.

Whenever a linear congruence has any solutions, it has infinitely many. The solutions fall into congruence classes, and there are only a finite number of congruence classes that solve the congruence.
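The congruence classes of solutions can be computed with the extended Euclidean algorithm: ax ≡ b (mod m) is solvable iff g = gcd(a, m) divides b, in which case there are exactly g solution classes. A sketch with illustrative function names:

```python
def extended_gcd(a, b):
    """Return (g, s, t) with a*s + b*t = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

def solve_linear_congruence(a, b, m):
    """All solutions of a*x = b (mod m), as residues in [0, m)."""
    g, s, _ = extended_gcd(a, m)
    if b % g != 0:
        return []                       # no solution
    x0 = (s * (b // g)) % (m // g)      # one solution of (a/g)x = b/g (mod m/g)
    return [x0 + k * (m // g) for k in range(g)]

print(solve_linear_congruence(6, 4, 10))  # [4, 9]
```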

5. CHINESE REMAINDER THEOREM

In number theory, the Chinese remainder theorem states that if one knows the remainders of
the Euclidean division of an integer n by several integers, then one can determine uniquely the
remainder of the division of n by the product of these integers, under the condition that the
divisors are pairwise coprime.

The earliest known statement of the theorem is by the Chinese mathematician Sunzi in Sunzi
Suanjing in the 3rd century AD.

The Chinese remainder theorem is widely used for computing with large integers, as it allows
replacing a computation for which one knows a bound on the size of the result by several similar
computations on small integers.

The Chinese remainder theorem (expressed in terms of congruences) is true over every principal
ideal domain. It has been generalized to any commutative ring, with a formulation involving
ideals.

History

The earliest known statement of the theorem, as a problem with specific numbers, appears in the
3rd-century book Sunzi Suanjing by the Chinese mathematician Sunzi:[1]

There are certain things whose number is unknown. If we count them by threes, we have two left
over; by fives, we have three left over; and by sevens, two are left over. How many things are
there?[2]

Sunzi's work contains neither a proof nor a full algorithm.[3] What amounts to an algorithm for
solving this problem was described by Aryabhata (6th century).[4] Special cases of the Chinese
remainder theorem were also known to Brahmagupta (7th century), and appear in Fibonacci's
Liber Abaci (1202).[5] The result was later generalized with a complete solution called Dayanshu

(大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections (數書九章, Shushu
Jiuzhang)[6] which was translated into English in early 19th century by British missionary
Alexander Wylie.[7]

The Chinese remainder theorem appears in Gauss's 1801 book Disquisitiones Arithmeticae.[8]

The notion of congruences was first introduced and used by Gauss in his Disquisitiones
Arithmeticae of 1801.[9] Gauss illustrates the Chinese remainder theorem on a problem involving
calendars, namely, "to find the years that have a certain period number with respect to the solar
and lunar cycle and the Roman indiction."[10] Gauss introduces a procedure for solving the
problem that had already been used by Euler but was in fact an ancient method that had appeared
several times.[11]

Theorem statement

Let n1, ..., nk be integers greater than 1, which are often called moduli or divisors. Let us denote
by N the product of the ni.

The Chinese remainder theorem asserts that if the ni are pairwise coprime, and if a1, ..., ak are
integers such that 0 ≤ ai < ni for every i, then there is one and only one integer x, such that 0 ≤ x
< N and the remainder of the Euclidean division of x by ni is ai for every i.

This may be restated as follows in terms of congruences: If the ni are pairwise coprime, and if a1, ..., ak are any integers, then there exists an integer x such that

x ≡ ai (mod ni) for every i,

and any two such x are congruent modulo N.[12]

In abstract algebra, the theorem is often restated as: if the ni are pairwise coprime, the map

x mod N ↦ (x mod n1, ..., x mod nk)

defines a ring isomorphism[13]

Z/NZ ≅ Z/n1Z × ⋯ × Z/nkZ

between the ring of integers modulo N and the direct product of the rings of integers modulo the ni. This means that for doing a sequence of arithmetic operations in Z/NZ, one may do the same computation independently in each Z/niZ and then get the result by applying the isomorphism (from the right to the left). This may be much faster than the direct computation if N and the number of operations are large. This is widely used, under the name multi-modular computation, for linear algebra over the integers or the rational numbers.

The theorem can also be restated in the language of combinatorics as the fact that the infinite
arithmetic progressions of integers form a Helly family.[14]
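A minimal constructive sketch, assuming pairwise coprime moduli and Python 3.8+ for pow(n, -1, m), applied to Sunzi's puzzle (remainders 2, 3, 2 modulo 3, 5, 7); crt is an illustrative name:

```python
def crt(remainders, moduli):
    """Solve x = r_i (mod n_i) for pairwise coprime moduli, one congruence at a time."""
    x, n = 0, 1
    for r, m in zip(remainders, moduli):
        # find t with x + n*t = r (mod m); n is invertible mod m by coprimality
        t = ((r - x) * pow(n, -1, m)) % m
        x += n * t
        n *= m
    return x % n, n

x, N = crt([2, 3, 2], [3, 5, 7])
print(x, N)   # 23 105: Sunzi's answer is 23, unique modulo 105
```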

6. FERMAT'S LITTLE THEOREM



Fermat's little theorem states that if p is a prime number, then for any integer a, the number a^p − a is an integer multiple of p. In the notation of modular arithmetic, this is expressed as

a^p ≡ a (mod p).

For example, if a = 2 and p = 7, then 2^7 = 128, and 128 − 2 = 126 = 7 × 18 is an integer multiple of 7.

If a is not divisible by p, Fermat's little theorem is equivalent to the statement that a^(p−1) − 1 is an integer multiple of p, or in symbols:[1][2]

a^(p−1) ≡ 1 (mod p).

For example, if a = 2 and p = 7, then 2^6 = 64, and 64 − 1 = 63 = 7 × 9 is thus a multiple of 7.

Fermat's little theorem is the basis for the Fermat primality test and is one of the fundamental results of
elementary number theory. The theorem is named after Pierre de Fermat, who stated it in 1640. It is called
the "little theorem" to distinguish it from Fermat's last theorem.
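Both forms of the theorem can be checked numerically, and the second form is exactly what the Fermat primality test exploits. A sketch (note that Carmichael numbers such as 561 fool this test for every base coprime to them):

```python
def fermat_check(a, p):
    """Verify a^p = a (mod p) directly."""
    return pow(a, p, p) == a % p

def fermat_probable_prime(n, bases=(2, 3, 5, 7)):
    """Fermat test: n passes if a^(n-1) = 1 (mod n) for each base not divisible by n."""
    return all(pow(a, n - 1, n) == 1 for a in bases if a % n != 0)

print(fermat_check(2, 7))          # True: 2^7 - 2 = 126 = 7 * 18
print(fermat_probable_prime(97))   # True (97 is prime)
print(fermat_probable_prime(91))   # False (91 = 7 * 13)
```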

METHOD OF SUCCESSIVE SUBSTITUTION

In modular arithmetic, the method of successive substitution is a method of solving problems of simultaneous congruences by using the definition of the congruence equation. It is commonly applied in cases where the conditions of the Chinese remainder theorem are not satisfied.

There is also an unrelated numerical-analysis method of successive substitution, a randomized algorithm used for root finding, an application of fixed-point iteration.

The method of successive substitution is also known as back substitution.

Example One

Consider the simple set of simultaneous congruences

x ≡ 3 (mod 4)

x ≡ 5 (mod 6)

Now, for x ≡ 3 (mod 4) to be true, x = 3 + 4j for some integer j. Substitute this in the second
equation

3+4j ≡ 5 (mod 6)

since we are looking for a solution to both equations.

Subtract 3 from both sides (this is permitted in modular arithmetic)

4j ≡ 2 (mod 6)

We simplify by dividing by the greatest common divisor of 4, 2, and 6. Division by 2 yields:

2j ≡ 1 (mod 3)

The modular multiplicative inverse of 2 mod 3 is 2, since 2 × 2 = 4 ≡ 1 (mod 3). After multiplying both sides by the inverse, we obtain:

j ≡ 2 × 1 (mod 3)

or

j ≡ 2 (mod 3)

For the above to be true: j = 2 + 3k for some integer k. Now substitute back into 3 + 4j and we
obtain
x = 3 + 4(2 + 3k)

Expand:

x = 11 + 12k

to obtain the solution

x ≡ 11 (mod 12)

Example 2 (An Easier Method)

Although the method utilized in the preceding example is coherent, it does not work for every
problem.

Consider this system of four congruences:

 x ≡ 1 (mod 2)
 x ≡ 2 (mod 3)
 x ≡ 3 (mod 5)
 x ≡ 4 (mod 11)

In order to find an expression that represents all the solutions satisfying this system of linear congruences, it is important to know that x ≡ a (mod b) has an analogous identity:

o x ≡ a (mod b) ⇔ x = bk + a for some k ∈ Z.

PROCEDURE

1. Begin by rewriting the first congruence as an equation:

 x = 2a + 1, ∀a ∈ Z

2. Rewrite the second congruence as an equation, and set the equation found in the first step
equal to this equation, since x will substitute the x in the second congruence:

 x ≡ 2 (mod 3)
 x = 2a + 1 ≡ 2 (mod 3)
 2a ≡ 1 (mod 3)
 a ≡ 2⁻¹ (mod 3)
 a = -1.

Because we want a nonnegative representative, we add the current modulus to a, giving a = −1 + 3 = 2.

3. We now rewrite a = 2 in terms of our current modulus:


 a ≡ 2 (mod 3)
 ∴ a = 3b + 2

4. We now substitute our current value of a into our equation that we found in step 1 with
respect to x:

 x = 2a + 1
 = 2(3b + 2) + 1, ∀b ∈ Z
 = 6b + 5.

∴ x = 6b + 5.
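Iterating the same substitution step through all four congruences finishes the problem. This sketch assumes pairwise coprime moduli (the non-coprime case needs the gcd cancellation shown in Example One) and Python 3.8+ for modular inverses; successive_substitution is an illustrative name:

```python
def successive_substitution(congruences):
    """Solve a list of (remainder, modulus) pairs by repeated back substitution."""
    r, m = congruences[0]
    for r2, m2 in congruences[1:]:
        # write x = r + m*t and require x = r2 (mod m2), then solve for t
        t = ((r2 - r) * pow(m, -1, m2)) % m2
        r, m = r + m * t, m * m2
    return r % m, m

x, N = successive_substitution([(1, 2), (2, 3), (3, 5), (4, 11)])
print(x, N)   # 323 330: the first two congruences give x = 6b + 5, as above
```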

7. FERMAT QUOTIENT

In number theory, the Fermat quotient of an integer a with respect to an odd prime p is defined as[1][2][3][4]

q_p(a) = (a^(p−1) − 1) / p.

If the base a is coprime to the exponent p, then Fermat's little theorem says that q_p(a) is an integer. The quotient is named after Pierre de Fermat.

Properties
From the definition, it is obvious that q_p(1) = 0.

In 1850 Gotthold Eisenstein proved that if a and b are both coprime to p, then:[5]

q_p(ab) ≡ q_p(a) + q_p(b) (mod p),   q_p(a^r) ≡ r · q_p(a) (mod p).

Eisenstein likened these congruences to properties of logarithms.

In 1895 Dmitry Mirimanoff pointed out that an iteration of Eisenstein's rules gives further congruences for Fermat quotients with nearby bases.[6][7]

Lerch's formula
M. Lerch proved in 1905 that[8][9][10]

q_p(1) + q_p(2) + ⋯ + q_p(p − 1) ≡ −W_p (mod p).

Here W_p = ((p − 1)! + 1)/p is the Wilson quotient.


Special values
Eisenstein discovered that the Fermat quotient with base 2 could be expressed in terms of the sum of the reciprocals mod p of the numbers lying in the first half of the range {1, ..., p − 1}:

q_p(2) ≡ −(1/2) (1/1 + 1/2 + ⋯ + 1/((p − 1)/2)) (mod p).

Later writers showed that the number of terms required in such a representation could be reduced from 1/2 to 1/4, 1/5, or even 1/6:

Eisenstein's series also has an increasingly complex connection to the Fermat quotients with
other bases, the first few examples being:
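A quick numerical sketch of the definition and of Eisenstein's logarithm-like product rule; fermat_quotient is an illustrative name:

```python
def fermat_quotient(a, p):
    """q_p(a) = (a^(p-1) - 1) / p, an integer when gcd(a, p) = 1 and p is prime."""
    num = a ** (p - 1) - 1
    assert num % p == 0, "Fermat's little theorem guarantees divisibility"
    return num // p

print(fermat_quotient(2, 7))   # (2^6 - 1) / 7 = 63 / 7 = 9

# Eisenstein's rule: q_p(ab) = q_p(a) + q_p(b) (mod p)
p = 13
lhs = fermat_quotient(6, p) % p
rhs = (fermat_quotient(2, p) + fermat_quotient(3, p)) % p
print(lhs == rhs)   # True
```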

9. EULER'S TOTIENT FUNCTION

In number theory, Euler's totient function counts the positive integers up to a given
integer n that are relatively prime to n. It is written using the Greek letter phi as φ(n) or ϕ(n), and
may also be called Euler's phi function. In other words, it is the number of integers k in the range
1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1.[2][3] The integers k of this
form are sometimes referred to as totatives of n.

For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively
prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, because gcd(9, 3) =
gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1
the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1.

Euler's totient function is a multiplicative function, meaning that if two numbers m and n are
relatively prime, then φ(mn) = φ(m)φ(n).[4][5] This function gives the order of the multiplicative
group of integers modulo n (the group of units of the ring ℤ/nℤ).[6] It is also used for defining the
RSA encryption system.
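The totient can be computed from the prime factorization via the product formula φ(n) = n · ∏(1 − 1/p) over the primes p dividing n. A short sketch, with phi as an illustrative name:

```python
def phi(n):
    """Euler's totient via trial-division factorization and the product formula."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                        # leftover prime factor
        result -= result // m
    return result

print(phi(9))   # 6: the totatives of 9 are 1, 2, 4, 5, 7, 8
```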

a) NONCOTOTIENT
In mathematics, a noncototient is a positive integer n that cannot be expressed as the difference
between a positive integer m and the number of coprime integers below it. That is, m − φ(m) = n,
where φ stands for Euler's totient function, has no solution for m. The cototient of n is defined as
n − φ(n), so a noncototient is a number that is never a cototient.

It is conjectured that all noncototients are even. This follows from a modified form of the slightly stronger version of the Goldbach conjecture: if the even number n can be represented as a sum of two distinct primes p and q, then

pq − φ(pq) = pq − (p − 1)(q − 1) = p + q − 1 = n − 1,

so the odd number n − 1 is a cototient. It is expected that every even number larger than 6 is a sum of two distinct primes, so probably no odd number larger than 5 is a noncototient. The remaining odd numbers are covered by the observations 1 = 2 − φ(2), 3 = 9 − φ(9), and 5 = 25 − φ(25).

For even numbers, it can be shown that for distinct odd primes p and q,

2pq − φ(2pq) = 2pq − (p − 1)(q − 1) = pq + p + q − 1 = (p + 1)(q + 1) − 2.

Thus, all even numbers n such that n + 2 can be written as (p + 1)(q + 1) with p, q distinct odd primes are cototients.

The first few noncototients are

10, 26, 34, 50, 52, 58, 86, 100, 116, 122, 130, 134, 146, 154, 170, 172, 186, 202, 206, 218, 222,
232, 244, 260, 266, 268, 274, 290, 292, 298, 310, 326, 340, 344, 346, 362, 366, 372, 386, 394,
404, 412, 436, 466, 470, 474, 482, 490, ... (sequence A005278 in the OEIS)

The cototients of n = 1, 2, 3, ... are

0, 1, 1, 2, 1, 4, 1, 4, 3, 6, 1, 8, 1, 8, 7, 8, 1, 12, 1, 12, 9, 12, 1, 16, 5, 14, 9, 16, 1, 22, 1, 16, 13, 18, 11, 24, 1, 20, 15, 24, 1, 30, 1, 24, 21, 24, 1, 32, 7, 30, 19, 28, 1, 36, 15, 32, 21, 30, 1, 44, 1, 32, 27, 32, 17, 46, 1, 36, 25, 46, 1, 48, ... (sequence A051953 in the OEIS)

The least k such that the cototient of k is n, for n = 0, 1, 2, ... (0 if no such k exists), are

1, 2, 4, 9, 6, 25, 10, 15, 12, 21, 0, 35, 18, 33, 26, 39, 24, 65, 34, 51, 38, 45, 30, 95, 36, 69,
0, 63, 52, 161, 42, 87, 48, 93, 0, 75, 54, 217, 74, 99, 76, 185, 82, 123, 60, 117, 66, 215,
72, 141, 0, ... (sequence A063507 in the OEIS)
The greatest k such that the cototient of k is n, for n = 0, 1, 2, ... (0 if no such k exists), are

1, ∞, 4, 9, 8, 25, 10, 49, 16, 27, 0, 121, 22, 169, 26, 55, 32, 289, 34, 361, 38, 85, 30, 529,
46, 133, 0, 187, 52, 841, 58, 961, 64, 253, 0, 323, 68, 1369, 74, 391, 76, 1681, 82, 1849,
86, 493, 70, 2209, 94, 589, 0, ... (sequence A063748 in the OEIS)

The number of k such that k − φ(k) = n, for n = 0, 1, 2, ..., are

1, ∞, 1, 1, 2, 1, 1, 2, 3, 2, 0, 2, 3, 2, 1, 2, 3, 3, 1, 3, 1, 3, 1, 4, 4, 3, 0, 4, 1, 4, 3, 3, 4, 3, 0, 5, 2, 2, 1,
4, 1, 5, 1, 4, 2, 4, 2, 6, 5, 5, 0, 3, 0, 6, 2, 4, 2, 5, 0, 7, 4, 3, 1, 8, 4, 6, 1, 3, 1, 5, 2, 7, 3, ... (sequence
A063740 in the OEIS)

Erdős (1913–1996) and Sierpiński (1882–1969) asked whether there exist infinitely many noncototients. This was finally answered in the affirmative by Browkin and Schinzel (1995), who showed that every member of the infinite family 2^k · 509203, for k ≥ 1, is an example (see Riesel number). Since then other infinite families, of roughly the same form, have been given by Flammenkamp and Luca (2000).
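The definition can be tested by brute force. The search bound below relies on the fact that a composite m has cototient at least √m (the multiples of its smallest prime factor p ≤ √m already number m/p ≥ √m), so m − φ(m) = n forces m ≤ n²; phi and is_noncototient are illustrative names:

```python
def phi(n):
    """Euler's totient via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def is_noncototient(n):
    """True if m - phi(m) = n has no solution m; m <= n^2 suffices for n >= 2."""
    return all(m - phi(m) != n for m in range(1, n * n + 2))

print([n for n in range(2, 36) if is_noncototient(n)])   # [10, 26, 34]
```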

b) NONTOTIENT

In number theory, a nontotient is a positive integer n which is not a totient number: it is not in
the range of Euler's totient function φ, that is, the equation φ(x) = n has no solution x. In other
words, n is a nontotient if there is no integer x that has exactly n coprimes below it. All odd
numbers are nontotients, except 1, since it has the solutions x = 1 and x = 2. The first few even
nontotients are

14, 26, 34, 38, 50, 62, 68, 74, 76, 86, 90, 94, 98, 114, 118, 122, 124, 134, 142, 146, 152,
154, 158, 170, 174, 182, 186, 188, 194, 202, 206, 214, 218, 230, 234, 236, 242, 244, 246,
248, 254, 258, 266, 274, 278, 284, 286, 290, 298, ... (sequence A005277 in the OEIS)

The least k such that the totient of k is n, for n = 1, 2, 3, ... (0 if no such k exists), are


1, 3, 0, 5, 0, 7, 0, 15, 0, 11, 0, 13, 0, 0, 0, 17, 0, 19, 0, 25, 0, 23, 0, 35, 0, 0, 0, 29, 0, 31, 0,
51, 0, 0, 0, 37, 0, 0, 0, 41, 0, 43, 0, 69, 0, 47, 0, 65, 0, 0, 0, 53, 0, 81, 0, 87, 0, 59, 0, 61, 0,
0, 0, 85, 0, 67, 0, 0, 0, 71, 0, 73, ... (sequence A049283 in the OEIS)

The greatest k such that the totient of k is n, for n = 1, 2, 3, ... (0 if no such k exists), are

2, 6, 0, 12, 0, 18, 0, 30, 0, 22, 0, 42, 0, 0, 0, 60, 0, 54, 0, 66, 0, 46, 0, 90, 0, 0, 0, 58, 0, 62,
0, 120, 0, 0, 0, 126, 0, 0, 0, 150, 0, 98, 0, 138, 0, 94, 0, 210, 0, 0, 0, 106, 0, 162, 0, 174, 0,
118, 0, 198, 0, 0, 0, 240, 0, 134, 0, 0, 0, 142, 0, 270, ... (sequence A057635 in the OEIS)

The number of k such that φ(k) = n, for n = 0, 1, 2, ..., are

0, 2, 3, 0, 4, 0, 4, 0, 5, 0, 2, 0, 6, 0, 0, 0, 6, 0, 4, 0, 5, 0, 2, 0, 10, 0, 0, 0, 2, 0, 2, 0, 7, 0, 0,
0, 8, 0, 0, 0, 9, 0, 4, 0, 3, 0, 2, 0, 11, 0, 0, 0, 2, 0, 2, 0, 3, 0, 2, 0, 9, 0, 0, 0, 8, 0, 2, 0, 0, 0,
2, 0, 17, ... (sequence A014197 in the OEIS)
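Nontotients can be found with a totient sieve. The search is complete because φ(x) ≥ √(x/2) for all x, so every x with φ(x) < 60 lies below 7200; phi_sieve is an illustrative name:

```python
def phi_sieve(limit):
    """phi(x) for all x <= limit, computed with a sieve over primes."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                       # p untouched so far, hence prime
            for k in range(p, limit + 1, p):
                phi[k] -= phi[k] // p         # multiply by (1 - 1/p)
    return phi

LIMIT = 7200                                  # covers all x with phi(x) < 60
phi = phi_sieve(LIMIT)
totient_values = set(phi[1:])
print([n for n in range(2, 60, 2) if n not in totient_values])
# [14, 26, 34, 38, 50]: the even nontotients below 60
```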
10. EULER AND WILSON'S THEOREM
The defining characteristic of U_n is that every element has a unique multiplicative inverse. It is quite possible for an element of U_n to be its own inverse; for example, in U_12, [1]² = [11]² = [5]² = [7]² = [1]. This stands in contrast to arithmetic in Z or R, where the only solutions to x² = 1 are ±1. If n is prime, then this familiar fact is true in U_n as well.

Theorem 3.10.1 If p is a prime, the only elements of U_p which are their own inverses are [1] and [p − 1] = [−1].

Proof.
Note that [n] is its own inverse if and only if [n²] = [n]² = [1], if and only if n² ≡ 1 (mod p), if and only if p | (n² − 1) = (n − 1)(n + 1). This is true if and only if p | (n − 1) or p | (n + 1). In the first case, n ≡ 1 (mod p), i.e., [n] = [1]. In the second case, n ≡ −1 ≡ p − 1 (mod p), i.e., [n] = [p − 1].

If p is prime, U_p = {[1], [2], …, [p − 1]}. The elements [2], [3], …, [p − 2] all have inverses different from themselves, so it must be possible to pair up each element in this list with its inverse from the list. This means that if we multiply all of [2], [3], …, [p − 2] together, we must get [1]. Multiplying in the remaining elements [1] and [p − 1] = [−1] gives Wilson's theorem: (p − 1)! ≡ −1 (mod p).
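The pairing argument can be verified numerically: only [1] and [p − 1] are their own inverses, and the full product (p − 1)! reduces to −1 modulo p. Illustrative helper names:

```python
def self_inverses(p):
    """Elements n of U_p with n * n = 1 (mod p); only 1 and p - 1 when p is prime."""
    return [n for n in range(1, p) if (n * n) % p == 1]

def wilson_factorial(p):
    """(p - 1)! mod p; Wilson's theorem says this is p - 1 (i.e. -1) for prime p."""
    f = 1
    for n in range(2, p):
        f = (f * n) % p
    return f

print(self_inverses(13))     # [1, 12]
print(wilson_factorial(13))  # 12, i.e. -1 (mod 13)
```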
11. PRIMITIVE ROOT MODULO n

In modular arithmetic, a branch of number theory, a number g is a primitive root modulo n if every number a coprime to n is congruent to a power of g modulo n. That is, g is a primitive root modulo n if for every integer a coprime to n, there is an integer k such that g^k ≡ a (mod n). Such a value k is called the index or discrete logarithm of a to the base g modulo n. Note that g is a primitive root modulo n if and only if g is a generator of the multiplicative group of integers modulo n.

Gauss defined primitive roots in Article 57 of the Disquisitiones Arithmeticae (1801), where he credited Euler with coining the term. In Article 56 he stated that Lambert and Euler knew of them, but he was the first to rigorously demonstrate that primitive roots exist for a prime n. In fact, the Disquisitiones contains two proofs: the one in Article 54 is a nonconstructive existence proof, while the other in Article 55 is constructive.

a) MULTIPLICATIVE ORDER

In number theory, given an integer a and a positive integer n with gcd(a, n) = 1, the multiplicative order of a modulo n is the smallest positive integer k with

a^k ≡ 1 (mod n).

In other words, the multiplicative order of a modulo n is the order of a in the multiplicative group of the units in the ring of the integers modulo n.

The order of a modulo n is usually written as ord_n(a).
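The two notions connect directly: g is a primitive root of a prime p exactly when ord_p(g) = p − 1. A brute-force sketch with illustrative helper names (it assumes gcd(a, n) = 1, otherwise the loop never terminates):

```python
def multiplicative_order(a, n):
    """Smallest k > 0 with a^k = 1 (mod n); assumes gcd(a, n) = 1."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def primitive_roots(p):
    """All primitive roots of the prime p: elements of order p - 1."""
    return [g for g in range(2, p) if multiplicative_order(g, p) == p - 1]

print(primitive_roots(11))  # [2, 6, 7, 8]
```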

b) DISCRETE LOGARITHM

In the mathematics of the real numbers, the logarithm log_b a is a number x such that b^x = a, for given numbers a and b. Analogously, in any group G, powers b^k can be defined for all integers k, and the discrete logarithm log_b a is an integer k such that b^k = a. In number theory, the more commonly used term is index: we can write x = ind_r a (mod m) (read "the index of a to the base r modulo m") for r^x ≡ a (mod m) if r is a primitive root of m and gcd(a, m) = 1.

Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. Several important algorithms in public-key cryptography base their security on the assumption that the discrete logarithm problem over carefully chosen groups has no efficient solution.
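For small moduli the index can be found by brute force; practical algorithms such as baby-step giant-step or Pollard's rho are far faster. An illustrative sketch:

```python
def discrete_log(g, a, n):
    """Smallest k >= 0 with g^k = a (mod n), by exhaustive search."""
    x = 1
    for k in range(n):
        if x == a % n:
            return k
        x = (x * g) % n
    return None   # a is not a power of g modulo n

print(discrete_log(7, 15, 41))  # 3, since 7^3 = 343 = 15 (mod 41)
```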

If a is an arbitrary integer relatively prime to n and g is a primitive root of n, then there exists among the numbers 0, 1, 2, ..., φ(n) − 1, where φ(n) is the totient function, exactly one number μ such that

a ≡ g^μ (mod n).

The number μ is then called the discrete logarithm of a with respect to the base g modulo n and is denoted ind_g a (mod n).

The term "discrete logarithm" is most commonly used in cryptography, although the term "generalized multiplicative order" is sometimes used as well (Schneier 1996, p. 501). In number theory, the term "index" is generally used instead (Gauss 1801; Nagell 1951, p. 112).

For example, the number 7 is a positive primitive root of n = 41 (in fact, the set of primitive roots of 41 is given by 6, 7, 11, 12, 13, 15, 17, 19, 22, 24, 26, 28, 29, 30, 34, 35), and since 7³ = 343 ≡ 15 (mod 41), the number 15 has discrete logarithm 3 with respect to base 7 (modulo 41) (Nagell 1951, p. 112). The generalized multiplicative order is implemented in the Wolfram Language as MultiplicativeOrder[g, n, a1 ], or more generally as MultiplicativeOrder[g, n, a1, a2, ... ].

Discrete logarithms were mentioned by Charlie the math genius in the Season 2 episode "In Plain Sight" of the television crime drama NUMB3RS.

12. QUADRATIC RESIDUE

In number theory, an integer q is called a quadratic residue modulo n if it is congruent to a perfect square modulo n, that is, if there exists an integer x such that x² ≡ q (mod n). Otherwise, q is called a quadratic nonresidue modulo n. Euler's criterion gives a practical test for deciding which case holds when the modulus is an odd prime.

13. EULER'S CRITERION
In number theory, Euler's criterion is a formula for determining whether an integer is a quadratic residue modulo a prime. Precisely,

Let p be an odd prime and a be an integer coprime to p. Then[1][2]

a^((p−1)/2) ≡ 1 (mod p) if a is a quadratic residue modulo p, and
a^((p−1)/2) ≡ −1 (mod p) if a is a quadratic nonresidue modulo p.

Euler's criterion can be concisely reformulated using the Legendre symbol:[3]

a^((p−1)/2) ≡ (a/p) (mod p).

The criterion first appeared in a 1748 paper by Leonhard Euler.[4]

Proof

The proof uses the fact that the residue classes modulo a prime number are a field. See the article
prime field for more details.

Because the modulus is prime, Lagrange's theorem applies: a polynomial of degree k can have at most k roots. In particular, x² ≡ a (mod p) has at most 2 solutions for each a. This immediately implies that besides 0, there are at least (p − 1)/2 distinct quadratic residues modulo p: each of the p − 1 possible values of x can only be accompanied by one other to give the same residue.

In fact, there are exactly (p − 1)/2 of them, because x² ≡ (p − x)² (mod p). So the distinct quadratic residues are 1², 2², ..., ((p − 1)/2)² (mod p).

As a is coprime to p, Fermat's little theorem says that

a^(p−1) ≡ 1 (mod p),

which can be written as

(a^((p−1)/2) − 1)(a^((p−1)/2) + 1) ≡ 0 (mod p).

Since the integers mod p form a field, for each a, one or the other of these factors must be zero.

Now if a is a quadratic residue, a ≡ x², then

a^((p−1)/2) ≡ x^(p−1) ≡ 1 (mod p).

So every quadratic residue (mod p) makes the first factor zero.


Applying Lagrange's theorem again, we note that there can be no more than (p − 1)/2 values of a that make the first factor zero. But as we noted at the beginning, there are at least (p − 1)/2 distinct quadratic residues (mod p) (besides 0). Therefore, they are precisely the residue classes that make the first factor zero. The other (p − 1)/2 residue classes, the nonresidues, must make the second factor zero, or they would not satisfy Fermat's little theorem. This is Euler's criterion.

Examples

Example 1: Finding primes for which a is a residue

Let a = 17. For which primes p is 17 a quadratic residue?

We can test primes p manually using the formula above.

In one case, testing p = 3, we have 17^((3 − 1)/2) = 17¹ ≡ 2 ≡ −1 (mod 3), therefore 17 is not a quadratic residue modulo 3.

In another case, testing p = 13, we have 17^((13 − 1)/2) = 17⁶ ≡ 1 (mod 13), therefore 17 is a quadratic residue modulo 13. As confirmation, note that 17 ≡ 4 (mod 13), and 2² = 4.

We can do these calculations faster by using various modular arithmetic and Legendre symbol
properties.

If we keep calculating the values, we find:

(17/p) = +1 for p = {13, 19, ...} (17 is a quadratic residue modulo these values)
(17/p) = −1 for p = {3, 5, 7, 11, 23, ...} (17 is not a quadratic residue modulo these
values).

Example 2: Finding residues given a prime modulus p

Which numbers are squares modulo 17 (quadratic residues modulo 17)?

We can manually calculate it as:

1² = 1
2² = 4
3² = 9
4² = 16
5² = 25 ≡ 8 (mod 17)
6² = 36 ≡ 2 (mod 17)
7² = 49 ≡ 15 (mod 17)
8² = 64 ≡ 13 (mod 17).

So the set of the quadratic residues modulo 17 is {1, 2, 4, 8, 9, 13, 15, 16}. Note that we did not need
to calculate squares for the values 9 through 16, as they are all negatives of the previously
squared values (e.g. 9 ≡ −8 (mod 17), so 9² ≡ (−8)² = 64 ≡ 13 (mod 17)).

We can find quadratic residues or verify them using the above formula. To test if 2 is a quadratic
residue modulo 17, we calculate 2^((17 − 1)/2) = 2^8 ≡ 1 (mod 17), so it is a quadratic residue. To test if
3 is a quadratic residue modulo 17, we calculate 3^((17 − 1)/2) = 3^8 ≡ 16 ≡ −1 (mod 17), so it is not a
quadratic residue.

Euler's criterion is related to the Law of quadratic reciprocity.


LEGENDRE SYMBOL

In number theory, the Legendre symbol is a multiplicative function with values 1, −1, 0
that is a quadratic character modulo an odd prime number p: its value at a (nonzero)
quadratic residue mod p is 1 and at a non-quadratic residue (non-residue) is −1. Its value
at zero is 0.

The Legendre symbol was introduced by Adrien-Marie Legendre in 1798[1] in the course
of his attempts at proving the law of quadratic reciprocity. Generalizations of the symbol
include the Jacobi symbol and Dirichlet characters of higher order. The notational
convenience of the Legendre symbol inspired introduction of several other "symbols"
used in algebraic number theory, such as the Hilbert symbol and the Artin symbol.

GAUSS’S LEMMA (NUMBER THEORY)


Gauss's lemma in number theory gives a condition for an integer to be a quadratic residue.
Although it is not useful computationally, it has theoretical significance, being involved in
some proofs of quadratic reciprocity.
It made its first appearance in Carl Friedrich Gauss's third proof (1808)[1]:458–462 of quadratic
reciprocity and he proved it again in his fifth proof (1818).[1]:496–501

Statement of the lemma

For any odd prime p let a be an integer that is coprime to p.

Consider the integers

    a, 2a, 3a, ..., ((p − 1)/2)·a

and their least positive residues modulo p. (These residues are all distinct, so there are (p − 1)/2
of them.)

Let n be the number of these residues that are greater than p/2. Then

    (a/p) = (−1)^n,

where (a/p) is the Legendre symbol.

Example

Taking p = 11 and a = 7, the relevant sequence of integers is


7, 14, 21, 28, 35.

After reduction modulo 11, this sequence becomes

7, 3, 10, 6, 2.

Three of these integers are larger than 11/2 (namely 6, 7 and 10), so n = 3. Correspondingly
Gauss's lemma predicts that

    (7/11) = (−1)³ = −1.

This is indeed correct, because 7 is not a quadratic residue modulo 11.

The above sequence of residues

7, 3, 10, 6, 2

may also be written

−4, 3, −1, −5, 2.

In this form, the integers larger than 11/2 appear as negative numbers. It is also apparent that the
absolute values of the residues are a permutation of the residues

1, 2, 3, 4, 5.

Proof

A fairly simple proof,[1]:458–462 reminiscent of one of the simplest proofs of Fermat's little
theorem, can be obtained by evaluating the product

    Z = a · 2a · 3a · ⋯ · ((p − 1)/2)a

modulo p in two different ways. On one hand it is equal to

    Z = a^((p − 1)/2) · (1 · 2 · 3 · ⋯ · (p − 1)/2).

The second evaluation takes more work. If x is a nonzero residue modulo p, let us define the
"absolute value" of x to be

    |x| = x        if 1 ≤ x ≤ (p − 1)/2,
    |x| = p − x    if (p + 1)/2 ≤ x ≤ p − 1.

Since n counts those multiples ka which are in the latter range, and since for those multiples, −ka
is in the first range, we have

    Z ≡ (−1)^n · |a| · |2a| · ⋯ · |((p − 1)/2)a|  (mod p).

Now observe that the values |ra| are distinct for r = 1, 2, …, (p − 1)/2.
Chapter III: Modular Arithmetic
1. Introduction
2. Definition (with examples)
3. Rules of Modular Arithmetic
a) Addition
b) Multiplication
4. Remainders
5. Standard Representation
6. Modular Operations
DEFINITION (WITH EXAMPLES)
What Is Modular Arithmetic?
When we divide two integers we will have an equation that looks like the following:

    A/B = Q remainder R

where

    A is the dividend
    B is the divisor
    Q is the quotient
    R is the remainder

Sometimes, we are only interested in what the remainder is when we divide A by B.
For these cases there is an operator called the modulo operator (abbreviated as mod).
Using the same A, B, Q, and R as above, we would have:

    A mod B = R

We would say this as A modulo B is equal to R, where B is referred to
as the modulus.
For example: 13/5 = 2 remainder 3, so 13 mod 5 = 3.

Visualize modulus with clocks

Observe what happens when we increment numbers by one and then divide them by 3.
The remainders start at 0 and increase by 1 each time, until the number reaches one less than the
number we are dividing by. After that, the sequence repeats.

By noticing this, we can visualize the modulo operator by using circles.
We write 0 at the top of a circle and continue clockwise, writing the integers 1, 2, ... up to one less
than the modulus.

For example, a clock with the 12 replaced by a 0 would be the circle for a modulus of 12.
To find the result of A mod B we can follow these steps:

1. Construct this clock for size B
2. Start at 0 and move around the clock A steps
3. Wherever we land is our solution.

(If the number is positive we step clockwise, if it's negative we step counter-clockwise.)

Examples
8 mod 4 = ?

With a modulus of 4 we make a clock with numbers 0, 1, 2, 3.

We start at 0 and go through 8 numbers in a clockwise sequence 1, 2, 3, 0, 1, 2, 3, 0.

We ended up at 0, so 8 mod 4 = 0.

7 mod 2 = ?

With a modulus of 2 we make a clock with numbers 0, 1.

We start at 0 and go through 7 numbers in a clockwise sequence 1, 0, 1, 0, 1, 0, 1.

We ended up at 1, so 7 mod 2 = 1.

−5 mod 3 = ?

With a modulus of 3 we make a clock with numbers 0, 1, 2.

We start at 0 and go through 5 numbers in counter-clockwise sequence (5 is negative) 2, 1, 0, 2, 1.

We ended up at 1, so −5 mod 3 = 1.
RULES OF MODULAR ARITHMETIC
If we have A mod B and we increase A by a multiple of B, we will end up in the same spot, i.e.

    A mod B = (A + K·B) mod B

for any integer K.

For example: 8 mod 4 = 0, and (8 + 2·4) mod 4 = 16 mod 4 = 0.
Notes to the Reader

mod in programming languages and calculators

Many programming languages, and calculators, have a mod operator, typically represented with
the % symbol. If you calculate the remainder of a negative number, some languages will give you a
negative result,
e.g.
-5 % 3 = -2.

Congruence Modulo

You may see an expression like:

    A ≡ B (mod C)

This says that A is congruent to B modulo C. It is similar to the expressions we used here, but not
quite the same. The section on modular operations below explains what it means and how it is
related to the expressions above.
In math, a number is said to be divisible by another number if the remainder is 0.

Divisibility rules are a set of general rules that are often used to determine whether or not a
number is evenly divisible by another number.
Divisibility Rules

2: If the number is even, that is, ends in 0, 2, 4, 6 or 8, it is divisible by 2.
3: If the sum of all of the digits is divisible by three, the number is divisible by 3.
4: If the number formed by the last two digits is divisible by 4, the number is divisible by 4.
5: If the last digit is a 0 or 5, the number is divisible by 5.
6: If a number is divisible by both three and two, it is divisible by 6.
7: If the difference of the last digit doubled and the rest of the digits is divisible by seven, the number is divisible by 7.
8: If the number formed by the last three digits is divisible by 8, the number is divisible by 8.
9: If the sum of the digits is divisible by nine, the number is divisible by 9.
10: If the last digit of the number is 0, it is divisible by 10.

Fun Facts

 Every number is divisible by 1.


 When a number is divisible by another number, then it is also divisible by each of the
factors of that number. For instance, a number divisible by 6 will also be divisible by 2
and 3.
REMAINDER
In mathematics, the remainder is the amount "left over" after performing some
computation. In arithmetic, the remainder is the integer "left over" after dividing one integer by
another to produce an integer quotient (integer division). In algebra, the remainder is the
polynomial "left over" after dividing one polynomial by another. The modulo operation is the
operation that produces such a remainder when given a dividend and divisor.

Formally it is also true that a remainder is what is left after subtracting one number from another,
although this is more precisely called the difference. This usage can be found in some elementary
textbooks; colloquially it is replaced by the expression "the rest" as in "Give me two dollars back
and keep the rest."[1] However, the term "remainder" is still used in this sense when a function is
approximated by a series expansion and the error expression ("the rest") is referred to as the
remainder term.

Integer division
If a and d are integers, with d non-zero, it can be proven that there exist unique integers q and r,
such that a = qd + r and 0 ≤ r < |d|. The number q is called the quotient, while r is called the
remainder.

See Euclidean division for a proof of this result and division algorithm for algorithms describing
how to calculate the remainder.

The remainder, as defined above, is called the least positive remainder or simply the
remainder.[2] The integer a is either a multiple of d or lies in the interval between consecutive
multiples of d, namely, q⋅d and (q + 1)d (for positive q).

At times it is convenient to carry out the division so that a is as close as possible to an integral
multiple of d, that is, we can write

a = k⋅d + s, with |s| ≤ |d/2| for some integer k.

In this case, s is called the least absolute remainder.[3] As with the quotient and remainder, k and
s are uniquely determined except in the case where d = 2n and s = ± n. For this exception we
have,

a = k⋅d + n = (k + 1)d − n.
A unique remainder can be obtained in this case by some convention such as always taking the
positive value of s.

EXAMPLES
In the division of 43 by 5 we have:

43 = 8 × 5 + 3,

so 3 is the least positive remainder. We also have,

43 = 9 × 5 − 2,

and −2 is the least absolute remainder.

These definitions are also valid if d is negative, for example, in the division of 43 by −5,

43 = (−8) × (−5) + 3,

and 3 is the least positive remainder, while,

43 = (−9) × (−5) + (−2)

and −2 is the least absolute remainder.

In the division of 42 by 5 we have:

42 = 8 × 5 + 2,

and since 2 < 5/2, 2 is both the least positive remainder and the least absolute remainder.

In these examples, the (negative) least absolute remainder is obtained from the least positive
remainder by subtracting 5, which is d. This holds in general. When dividing by d, either both
remainders are positive and therefore equal, or they have opposite signs. If the positive
remainder is r1, and the negative one is r2, then

r1 = r2 + d.

FOR FLOATING-POINT NUMBERS

When a and d are floating-point numbers, with d non-zero, a can be divided by d without
remainder, with the quotient being another floating-point number. If the quotient is constrained
to being an integer, however, the concept of remainder is still necessary. It can be proved that
there exists a unique integer quotient q and a unique floating-point remainder r such that
a = qd + r with 0 ≤ r < |d|.

Extending the definition of remainder for floating-point numbers as described above is not of
theoretical importance in mathematics; however, many programming languages implement this
definition, see modulo operation.

IN PROGRAMMING LANGUAGES

While there are no difficulties inherent in the definitions, there are implementation issues that
arise when negative numbers are involved in calculating remainders. Different programming
languages have adopted different conventions:

 Pascal chooses the result of the mod operation positive, but does not allow d to be negative or
zero (so, a = (a div d ) × d + a mod d is not always valid).[4]

 C99 chooses the remainder with the same sign as the dividend a.[5] (Before C99, the C language
allowed other choices.)

 Perl, Python (only modern versions), and Common Lisp choose the remainder with the same sign
as the divisor d.[6]

 Haskell and Scheme offer two functions, remainder and modulo – PL/I has mod and rem, while
Fortran has mod and modulo; in each case, the former agrees in sign with the dividend, and the
latter with the divisor.

POLYNOMIAL DIVISION

Euclidean division of polynomials is very similar to Euclidean division of integers and leads to
polynomial remainders. Its existence is based on the following theorem: Given two univariate
polynomials a(x) and b(x) (with b(x) not the zero polynomial) defined over a field (in particular,
the reals or complex numbers), there exist two polynomials q(x) (the quotient) and r(x) (the
remainder) which satisfy:[7]

    a(x) = b(x) q(x) + r(x)

where

    r(x) = 0  or  deg(r(x)) < deg(b(x)),

where "deg(...)" denotes the degree of the polynomial (the degree of the zero polynomial, the
constant polynomial whose value is always 0, is defined to be negative, so that this degree
condition will always be valid when this is the remainder). Moreover, q(x) and r(x) are uniquely
determined by these relations.

This differs from the Euclidean division of integers in that, for the integers, the degree condition
is replaced by the bounds on the remainder r (non-negative and less than the divisor, which
ensures that r is unique). The similarity of Euclidean division for integers and also for
polynomials leads one to ask for the most general algebraic setting in which Euclidean division
is valid. The rings for which such a theorem exists are called Euclidean domains, but in this
generality uniqueness of the quotient and remainder are not guaranteed.[8]

Polynomial division leads to a result known as the Remainder theorem: If a polynomial f(x) is
divided by x − k, the remainder is the constant r = f(k).[9]

STANDARD REPRESENTATION

Definition
The standard representation of a symmetric group on a finite set of degree n is an irreducible
representation of degree n − 1 (over a field whose characteristic does not divide n) defined in
the following equivalent ways:

1. Take a representation of degree n obtained by the usual action of the symmetric group
   on the basis set of a vector space. Now, look at the (n − 1)-dimensional subspace of
   vectors whose sum of coordinates in the basis is zero. The representation restricts to an
   irreducible representation of degree n − 1 on this subspace. This is the standard
   representation.
2. Take a representation of degree n obtained by the usual action of the symmetric group
   on the basis set of a vector space. Consider the subspace spanned by the sum of the basis
   vectors. This is a subrepresentation of degree one. Consider the quotient space by this
   subspace. The representation descends naturally to a representation on the quotient space.
   This is the standard representation.

Facts
 The standard representation is the representation corresponding to the partition
(n − 1, 1). We see that the hook-length formula gives us a degree:

    n! / (n · (n − 2)!) = n − 1,

which is the same as the degree we expect.

 The matrices for the standard representation (using method (1) or method (2)) can be
written using elements in the set {0, 1, −1}. In fact, using method (2), we can obtain
matrices where every column either has exactly one 1 and everything else a 0, or has all
−1s.

Particular cases

n = 2: cyclic group:Z2. Standard representation: the nontrivial one-dimensional
representation, sending the non-identity element to −1. Degree of standard representation
(= n − 1): 1. See linear representation theory of cyclic group:Z2.

n = 3: symmetric group:S3. Standard representation: standard representation of symmetric
group:S3. Degree of standard representation: 2. See linear representation theory of
symmetric group:S3.

n = 4: symmetric group:S4. Standard representation: standard representation of symmetric
group:S4. Degree of standard representation: 3. See linear representation theory of
symmetric group:S4.

n = 5: symmetric group:S5. Standard representation: standard representation of symmetric
group:S5. Degree of standard representation: 4. See linear representation theory of
symmetric group:S5.

MODULAR OPERATIONS
In mathematics, modular arithmetic is a system of arithmetic for integers, where
numbers "wrap around" when reaching a certain value, called the modulus. The modern
approach to modular arithmetic was developed by Carl Friedrich Gauss in his book
Disquisitiones Arithmeticae, published in 1801.

A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two
12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Usual addition would
suggest that the later time should be 7 + 8 = 15, but this is not the answer because clock time
"wraps around" every 12 hours. Because the hour number starts over after it reaches 12, this is
arithmetic modulo 12. In terms of the definition below, 15 is congruent to 3 modulo 12, so the
(military) time called "15:00" has the equivalent clock form "3:00".

DEFINITION OF CONGRUENCE RELATION



Modular arithmetic can be handled mathematically by introducing a congruence relation on the


integers that is compatible with the operations on integers: addition, subtraction, and
multiplication. For a positive integer n, two numbers a and b are said to be congruent modulo n,
if their difference a − b is an integer multiple of n (that is, if there is an integer k such that a − b
= kn). This congruence relation is typically considered when a and b are integers, and is denoted:

    a ≡ b (mod n)

The parentheses mean that (mod n) applies to the entire equation, not just to the right-hand side
(here b). Sometimes, = is used instead of ≡; in this case, if the parentheses are omitted, this
generally means that "mod" denotes the modulo operation applied to the right-hand side, and
that the equality implies 0 ≤ a < n.

The number n is called the modulus of the congruence.

The congruence relation may be rewritten as

    a = kn + b,

explicitly showing its relationship with Euclidean division. However, b need not be the
remainder of the division of a by n. More precisely, what the statement a ≡ b (mod n) asserts is
that a and b have the same remainder when divided by n. That is,

    a = pn + r,
    b = qn + r,

where 0 ≤ r < n is the common remainder. Subtracting these two expressions, we recover the
previous relation:

    a − b = kn,

by setting k = p − q.

Examples

For example,

    38 ≡ 14 (mod 12)

because 38 − 14 = 24, which is a multiple of 12, or, equivalently, because both 38 and 14 have
the same remainder 2 when divided by 12.

The same rule holds for negative values:

    −8 ≡ 7 (mod 5),

because −8 − 7 = −15, which is a multiple of 5.
Because it is common to consider several congruence relations for different moduli at the same
time, the modulus is incorporated in the notation. The congruence relation for a given modulus is
considered to be a binary relation.

Properties
The congruence relation satisfies all the conditions of an equivalence relation:

 Reflexivity: a ≡ a (mod n)
 Symmetry: a ≡ b (mod n) if b ≡ a (mod n) for all a, b, and n.
 Transitivity: If a ≡ b (mod n) and b ≡ c (mod n), then a ≡ c (mod n)

If a1 ≡ b1 (mod n) and a2 ≡ b2 (mod n), or if a ≡ b (mod n), then:

 a + k ≡ b + k (mod n) for any integer k (compatibility with translation)
 k a ≡ k b (mod n) for any integer k (compatibility with scaling)
 a1 + a2 ≡ b1 + b2 (mod n) (compatibility with addition)
 a1 − a2 ≡ b1 − b2 (mod n) (compatibility with subtraction)
 a1 a2 ≡ b1 b2 (mod n) (compatibility with multiplication)
 a^k ≡ b^k (mod n) for any non-negative integer k (compatibility with exponentiation)
 p(a) ≡ p(b) (mod n), for any polynomial p(x) with integer coefficients (compatibility with
polynomial evaluation)

If a ≡ b (mod n), then it is generally false that k^a ≡ k^b (mod n). However, one has:

 If c ≡ d (mod φ(n)), where φ is Euler's totient function, then a^c ≡ a^d (mod n) provided a is coprime
with n

For cancellation of common terms, we have the following rules:

 If a + k ≡ b + k (mod n) for any integer k, then a ≡ b (mod n)


 If k a ≡ k b (mod n) and k is coprime with n, then a ≡ b (mod n)

The modular multiplicative inverse is defined by the following rules:

 Existence: there exists an integer denoted a^(−1) such that a·a^(−1) ≡ 1 (mod n) if and only if a is
coprime with n. This integer a^(−1) is called a modular multiplicative inverse of a modulo n.
 If a ≡ b (mod n) and a^(−1) exists, then a^(−1) ≡ b^(−1) (mod n) (compatibility with multiplicative inverse,
and, if a = b, uniqueness modulo n)
 If a x ≡ b (mod n) and a is coprime to n, the solution to this linear congruence is given by x ≡ a^(−1) b
(mod n)

The multiplicative inverse x ≡ a^(−1) (mod n) may be efficiently computed by solving Bézout's
equation a x + n y = 1 using the extended Euclidean algorithm. In particular, if p is a prime
number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse
exists for all a not congruent to zero modulo p.
Some of the more advanced properties of congruence relations are the following:

 Fermat's little theorem: If p is prime and does not divide a, then a^(p−1) ≡ 1 (mod p).
 Euler's theorem: If a and n are coprime, then a^φ(n) ≡ 1 (mod n), where φ is Euler's totient function.
 A simple consequence of Fermat's little theorem is that if p is prime, then a^(−1) ≡ a^(p−2) (mod p) is
the multiplicative inverse of 0 < a < p. More generally, from Euler's theorem, if a and n are
coprime, then a^(−1) ≡ a^(φ(n)−1) (mod n).
 Another simple consequence is that if a ≡ b (mod φ(n)), where φ is Euler's totient function, then
k^a ≡ k^b (mod n) provided k is coprime with n.
 Wilson's theorem: p is prime if and only if (p − 1)! ≡ −1 (mod p).
 Chinese remainder theorem: For any a, b and coprime m, n, there exists a unique x (mod mn) such
that x ≡ a (mod m) and x ≡ b (mod n). In fact, x ≡ b (m^(−1) mod n) m + a (n^(−1) mod m) n (mod mn), where
m^(−1) mod n is the inverse of m modulo n and n^(−1) mod m is the inverse of n modulo m.
 Lagrange's theorem: The congruence f(x) ≡ 0 (mod p), where p is prime, and f(x) = a_0 x^n + ... +
a_n is a polynomial with integer coefficients such that a_0 ≢ 0 (mod p), has at most n roots.
 Primitive root modulo n: A number g is a primitive root modulo n if, for every integer a coprime
to n, there is an integer k such that g^k ≡ a (mod n). A primitive root modulo n exists if and only if
n is equal to 2, 4, p^k or 2p^k, where p is an odd prime number and k is a positive integer. If a
primitive root modulo n exists, then there are exactly φ(φ(n)) such primitive roots, where φ is the
Euler's totient function.
 Quadratic residue: An integer a is a quadratic residue modulo n, if there exists an integer
x such that x² ≡ a (mod n). Euler's criterion asserts that, if p is an odd prime, and a is not a
multiple of p, then a is a quadratic residue modulo p if and only if a^((p−1)/2) ≡ 1 (mod p).

Congruence classes

Like any congruence relation, congruence modulo n is an equivalence relation, and the
equivalence class of the integer a, denoted by [a]_n, is the set {…, a − 2n, a − n, a, a + n, a + 2n,
…}. This set, consisting of the integers congruent to a modulo n, is called the congruence class
or residue class or simply residue of the integer a, modulo n. When the modulus n is known
from the context, that residue may also be denoted [a].

Residue systems
Each residue class modulo n may be represented by any one of its members, although we usually
represent each residue class by the smallest nonnegative integer which belongs to that class
(since this is the proper remainder which results from division). Any two members of different
residue classes modulo n are incongruent modulo n. Furthermore, every integer belongs to one
and only one residue class modulo n.[1]

The set of integers {0, 1, 2, …, n − 1} is called the least residue system modulo n. Any set of n
integers, no two of which are congruent modulo n, is called a complete residue system modulo
n.

The least residue system is a complete residue system, and a complete residue system is simply a
set containing precisely one representative of each residue class modulo n.[2] The least residue
system modulo 4 is {0, 1, 2, 3}. Some other complete residue systems modulo 4 are:

 {1, 2, 3, 4}
 {13, 14, 15, 16}
 {−2, −1, 0, 1}
 {−13, 4, 17, 18}
 {−5, 0, 6, 21}
 {27, 32, 37, 42}

Some sets which are not complete residue systems modulo 4 are:

 {−5, 0, 6, 22} since 6 is congruent to 22 modulo 4.


 {5, 15} since a complete residue system modulo 4 must have exactly 4 incongruent
residue classes.

Reduced residue systems


Any set of φ(n) integers that are relatively prime to n and that are mutually incongruent modulo
n, where φ(n) denotes Euler's totient function, is called a reduced residue system modulo n.[3]
The set {5, 15} from above is an example of a reduced residue system modulo 4.
Integers modulo n

The set of all congruence classes of the integers for a modulus n is called the ring of integers
modulo n,[4] and is denoted Z/nZ, Z/n, or Z_n. The notation Z_n is, however, not
recommended because it can be confused with the set of n-adic integers. The ring Z/nZ is
fundamental to various branches of mathematics (see Applications below).

The set is defined for n > 0 as:

    Z/nZ = { [a]_n | a ∈ Z } = { [0]_n, [1]_n, ..., [n − 1]_n }.

(When n = 0, Z/nZ does not have zero elements; rather, it is isomorphic to Z, since [a]_0 = {a}.)

We define addition, subtraction, and multiplication on Z/nZ by the following rules:

    [a]_n + [b]_n = [a + b]_n
    [a]_n − [b]_n = [a − b]_n
    [a]_n [b]_n = [ab]_n

The verification that this is a proper definition uses the properties given before.

In this way, Z/nZ becomes a commutative ring. For example, in the ring Z/24Z, we have

    [12]_24 + [21]_24 = [9]_24,

as in the arithmetic for the 24-hour clock.

We use the notation Z/nZ because this is the quotient ring of Z by the ideal nZ containing all
integers divisible by n, where 0Z = {0} is the singleton set. Thus Z/nZ is a field when nZ is a
maximal ideal, that is, when n is prime.

This can also be constructed from the group Z under the addition operation alone. The residue
class [a]_n is the group coset of a in the quotient group Z/nZ, a cyclic group.[5]


Rather than excluding the special case n = 0, it is more useful to include Z/0Z (which, as
mentioned before, is isomorphic to the ring Z of integers), for example, when discussing the
characteristic of a ring.

The ring of integers modulo n is a finite field if and only if n is prime (this ensures that every
nonzero element has a multiplicative inverse). If n = p^k is a prime power with k > 1, there exists a
unique (up to isomorphism) finite field GF(n) with n elements, but this is not Z/nZ, which fails to
be a field because it has zero-divisors.

We denote the multiplicative subgroup of the modular integers by (Z/nZ)×. This consists of the
classes [a]_n for a coprime to n, which are precisely the classes possessing a multiplicative inverse.
This forms a commutative group under multiplication, with order φ(n).

Applications

In theoretical mathematics, modular arithmetic is one of the foundations of number theory,


touching on almost every aspect of its study, and it is also used extensively in group theory, ring
theory, knot theory, and abstract algebra. In applied mathematics, it is used in computer algebra,
cryptography, computer science, chemistry and the visual and musical arts.

A very practical application is to calculate checksums within serial number identifiers. For
example, International Standard Book Number (ISBN) uses modulo 11 (if issued before 1
January, 2007) or modulo 10 (if issued on or after 1 January, 2007) arithmetic for error detection.
Likewise, International Bank Account Numbers (IBANs), for example, make use of modulo 97
arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of the
CAS registry number (a unique identifying number for each chemical compound) is a check
digit, which is calculated by taking the last digit of the first two parts of the CAS registry number
times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and
computing the sum modulo 10.
In cryptography, modular arithmetic directly underpins public key systems such as RSA and
Diffie–Hellman, and provides finite fields which underlie elliptic curves, and is used in a variety
of symmetric key algorithms including Advanced Encryption Standard (AES), International Data
Encryption Algorithm (IDEA), and RC4. RSA and Diffie–Hellman use modular exponentiation.

In computer algebra, modular arithmetic is commonly used to limit the size of integer
coefficients in intermediate calculations and data. It is used in polynomial factorization, a
problem for which all known efficient algorithms use modular arithmetic. It is used by the most
efficient implementations of polynomial greatest common divisor, exact linear algebra and
Gröbner basis algorithms over the integers and the rational numbers.

In computer science, modular arithmetic is often applied in bitwise operations and other
operations involving fixed-width, cyclic data structures. The modulo operation, as implemented
in many programming languages and calculators, is an application of modular arithmetic that is
often used in this context. The logical operator XOR sums 2 bits, modulo 2.

In music, arithmetic modulo 12 is used in the consideration of the system of twelve-tone equal
temperament, where octave and enharmonic equivalency occurs (that is, pitches in a 1∶2 or 2∶1
ratio are equivalent, and C-sharp is considered the same as D-flat).

The method of casting out nines offers a quick check of decimal arithmetic computations
performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial
property that 10 ≡ 1 (mod 9).

Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In
particular, Zeller's congruence and the Doomsday algorithm make heavy use of modulo-7
arithmetic.

More generally, modular arithmetic also has application in disciplines such as law (see for
example, apportionment), economics, (see for example, game theory) and other areas of the
social sciences, where proportional division and allocation of resources plays a central part of the
analysis.

Computational complexity
Since modular arithmetic has such a wide range of applications, it is important to know how hard
it is to solve a system of congruences. A linear system of congruences can be solved in
polynomial time with a form of Gaussian elimination, for details see linear congruence theorem.
Algorithms, such as Montgomery reduction, also exist to allow simple arithmetic operations,
such as multiplication and exponentiation modulo n, to be performed efficiently on large
numbers.

Some operations, like finding a discrete logarithm or solving a quadratic congruence, appear to be as hard as integer factorization and thus are a starting point for cryptographic algorithms and encryption. These problems might be NP-intermediate.

Solving a system of non-linear modular arithmetic equations is NP-complete.[6]

Example implementations

Below are three reasonably fast C functions, two for performing modular multiplication and one for modular exponentiation on unsigned integers not larger than 63 bits, without overflow in the intermediate operations.

An algorithmic way to compute a · b (mod m):

uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m)
{
    uint64_t d = 0, mp2 = m >> 1;
    int i;
    if (a >= m) a %= m;
    if (b >= m) b %= m;
    for (i = 0; i < 64; ++i)
    {
        /* double d modulo m */
        d = (d > mp2) ? (d << 1) - m : d << 1;
        /* if the current top bit of a is set, add b */
        if (a & 0x8000000000000000ULL)
            d += b;
        if (d >= m) d -= m;
        a <<= 1;
    }
    return d;
}

On computer architectures where an extended precision format with at least 64 bits of mantissa is
available (such as the long double type of most x86 C compilers), the following routine is faster
than any algorithmic solution, by employing the trick that, by hardware, floating-point
multiplication results in the most significant bits of the product kept, while integer multiplication
results in the least significant bits kept:

uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m)
{
    long double x;
    uint64_t c;
    int64_t r;
    if (a >= m) a %= m;
    if (b >= m) b %= m;
    x = a;
    c = x * b / m;                              /* most significant bits of a*b/m */
    r = (int64_t)(a * b - c * m) % (int64_t)m;  /* least significant bits */
    return r < 0 ? r + m : r;
}

Below is a C function for performing modular exponentiation that uses the mul_mod function implemented above.

An algorithmic way to compute a^b (mod m):

uint64_t pow_mod(uint64_t a, uint64_t b, uint64_t m)
{
    uint64_t r = m == 1 ? 0 : 1;
    while (b > 0) {
        if (b & 1)
            r = mul_mod(r, a, m);   /* multiply in the current bit of b */
        b = b >> 1;
        a = mul_mod(a, a, m);       /* square the base */
    }
    return r;
}

However, for all above routines to work, m must not exceed 63 bits.
Chapter IV: Applications of Modular Arithmetic
1. Applications of Modular Arithmetic
2. Solving Linear Congruence
3. Arithmetic with Large Integers
4. Congruence of Squares
5. Luhn Formula
6. Mod n Cryptanalysis
1. APPLICATIONS OF MODULAR ARITHMETIC

Modular arithmetic is a special type of arithmetic that involves only integers. The goal of this article is to explain the basics of modular arithmetic while presenting a progression of more difficult and more interesting problems that are easily solved using modular arithmetic.

Motivation

Let's use a clock as an example, except let's replace the 12 at the top of the clock with a 0.

Starting at noon, the hour hand points in order to the following:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0, 1, 2, ...

This is the way in which we count in modulo 12. When we add 1 to 11, we arrive back at 0. The same is true in any other modulus (modular arithmetic system). In modulo 5, we count

0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, ...

We can also count backwards in modulo 5. Any time we subtract 1 from 0, we get 4. So, the integers from 0 to 12, when written in modulo 5, are

0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2,

where 12 is the same as 2 in modulo 5. Because all integers can be expressed as 0, 1, 2, 3, or 4 in modulo 5, we give these integers their own name: the residue classes modulo 5. In general, for a natural number n that is greater than 1, the modulo n residues are the integers that are whole numbers less than n:

0, 1, 2, ..., n − 1.

This just relates each integer to its remainder from the Division Theorem. While this may not seem all that useful at first, counting in this way can help us solve an enormous array of number theory problems much more easily!

Residue

We say that r is the modulo-m residue of n when n ≡ r (mod m) and 0 ≤ r < m.

Congruence

There is a mathematical way of saying that all of the integers are the same as one of the modulo 5 residues. For instance, we say that 7 and 2 are congruent modulo 5. We write this using the symbol ≡:

7 ≡ 2 (mod 5).

In other words, this means that these integers leave the same residue modulo 5.

The (mod 5) part just tells us that we are working with the integers modulo 5. In modulo 5, two integers are congruent when their difference is a multiple of 5. In general, two integers a and b are congruent modulo n when a − b is a multiple of n. In other words, a ≡ b (mod n) when

(a − b)/n

is an integer. Otherwise, a ≢ b (mod n), which means that a and b are not congruent modulo n.
Examples

 31 ≡ 1 (mod 10) because 31 − 1 = 30 is a multiple of 10.

 52 ≡ 24 (mod 7) because (52 − 24)/7 = 4, which is an integer.

 53 ≢ 58 (mod 3) because 53 − 58 = −5, which is not a multiple of 3.

 91 ≢ 18 (mod 6) because (91 − 18)/6 = 73/6, which is not an integer.

Sample Problem

Find the modulo 4 residue of 311.

Solution:

Since 311 ÷ 4 = 77 R 3, we know that

311 ≡ 3 (mod 4),

and 3 is the modulo 4 residue of 311.

Another Solution:

Since 311 = 300 + 11, we know that

311 ≡ 300 + 11 (mod 4).

We can now solve it easily:

300 + 11 ≡ 0 + 3 ≡ 3 (mod 4),

and 3 is the modulo 4 residue of 311.

Making Computation Easier

We don't always need to perform tedious computations to discover solutions to interesting problems. If all we need to know about are remainders when integers are divided by m, then we can work directly with those remainders in modulo m. This can be more easily understood with a few examples.

Addition

Problem

Suppose we want to find the units digit of the following sum:

2403 + 791 + 688 + 4339.

We could find their sum, which is 8221, and note that the units digit is 1. However, we could find the units digit with far less calculation.

Solution

We can simply add the units digits of the addends:

3 + 1 + 8 + 9 = 21.

The units digit of this sum is 1, which must be the same as the units digit of the four-digit sum we computed earlier.
Why we only need to use remainders

We can rewrite each of the integers in terms of multiples of 10 and remainders:

2403 = 240 · 10 + 3
791 = 79 · 10 + 1
688 = 68 · 10 + 8
4339 = 433 · 10 + 9.

When we add all four integers, we get

(240 + 79 + 68 + 433) · 10 + (3 + 1 + 8 + 9).

At this point, we already see the units digits grouped apart and added to a multiple of 10 (which will not affect the units digit of the sum):

820 · 10 + 21 = 8221.

Solution using modular arithmetic

Now let's look back at this solution, using modular arithmetic from the start. Note that

2403 ≡ 3 (mod 10), 791 ≡ 1 (mod 10), 688 ≡ 8 (mod 10), 4339 ≡ 9 (mod 10).

Because we only need the modulo 10 residue of the sum, we add just the residues of the summands:

2403 + 791 + 688 + 4339 ≡ 3 + 1 + 8 + 9 ≡ 21 ≡ 1 (mod 10),

so the units digit of the sum is just 1.


Addition rule

In general, when a, b, c, and d are integers and m is a positive integer such that

a ≡ c (mod m) and b ≡ d (mod m),

the following is always true:

a + b ≡ c + d (mod m).

And as we did in the problem above, we can apply more pairs of equivalent integers to both sides, just repeating this simple principle.

Proof of the addition rule

Let a = c + mj and b = d + mk, where j and k are integers. Adding the two equations we get:

a + b = c + d + m(j + k),

which is equivalent to saying

a + b ≡ c + d (mod m).

Subtraction

The same shortcut that works with addition of remainders works also with subtraction.

Problem

Find the remainder when the difference between 60002 and 601 is divided by 6.

Solution

Note that 60002 ≡ 2 (mod 6) and 601 ≡ 1 (mod 6). So,

60002 − 601 ≡ 2 − 1 (mod 6).

Thus,

60002 − 601 ≡ 1 (mod 6),

so 1 is the remainder when the difference is divided by 6. (Perform the subtraction yourself, divide by 6, and see!)

Subtraction rule

When a, b, c, and d are integers and m is a positive integer such that

a ≡ c (mod m) and b ≡ d (mod m),

the following is always true:

a − b ≡ c − d (mod m).

Multiplication

Modular arithmetic provides an even larger advantage when multiplying than when adding or
subtracting. Let's take a look at a problem that demonstrates the point.

Problem

Jerry has 44 boxes of soda in his truck. The cans of soda in each box are packed oddly so that there are 113 cans of soda in each box. Jerry plans to pack the sodas into cases of 12 cans to sell. After making as many complete cases as possible, how many sodas will Jerry have leftover?

Solution

First, we note that this word problem is asking us to find the remainder when the product 44 · 113 is divided by 12.

Now, we can write each of 44 and 113 in terms of multiples of 12 and remainders:

44 = 3 · 12 + 8 and 113 = 9 · 12 + 5.

This gives us a nice way to view their product:

44 · 113 = (3 · 12 + 8)(9 · 12 + 5).

Using FOIL, we get that this equals

(3 · 9) · 12^2 + (3 · 5) · 12 + (8 · 9) · 12 + 8 · 5.

We can already see that each part of the product is a multiple of 12, except the product of the remainders when each of 44 and 113 is divided by 12. That part of the product is 8 · 5 = 40, which leaves a remainder of 4 when divided by 12. So, Jerry has 4 sodas leftover after making as many cases of 12 as possible.

Solution using modular arithmetic

First, we note that

44 ≡ 8 (mod 12) and 113 ≡ 5 (mod 12).

Thus,

44 · 113 ≡ 8 · 5 ≡ 40 ≡ 4 (mod 12),

meaning there are 4 sodas leftover. Yeah, that was much easier.

Multiplication rule

When a, b, c, and d are integers and m is a positive integer such that

a ≡ c (mod m) and b ≡ d (mod m),

the following is always true:

a · b ≡ c · d (mod m).

Exponentiation

Since exponentiation is just repeated multiplication, it makes sense that modular arithmetic
would make many problems involving exponents easier. In fact, the advantage in computation is
even larger and we explore it a great deal more in the intermediate modular arithmetic article.

Note to everybody: exponentiation is very useful, as in the following problem:

Problem #1

What is the last digit of 7^(7^1000)?

We can solve this problem using mods. The problem asks for 7^(7^1000) (mod 10). The units digit of the powers of 7 repeats with period 4 (namely 7, 9, 3, 1), so we only need the exponent 7^1000 modulo 4. We see that 7 is congruent to −1 in mod 4, so we can replace the 7s with −1s. (−1)^1000 is simply 1, so therefore

7^(7^1000) ≡ 7^1 ≡ 7 (mod 10),

and 7 really is the last digit.
Problem #2

What are the tens and units digits of 7^1942?

We could (in theory) solve this problem by trying to compute 7^1942, but this would be extremely time-consuming. Moreover, it would give us much more information than we need. Since we want only the tens and units digits of the number in question, it suffices to find the remainder when the number is divided by 100. In other words, all of the information we need can be found using arithmetic mod 100.

We begin by writing down the first few powers of 7 mod 100:

7^1 ≡ 7, 7^2 ≡ 49, 7^3 ≡ 43, 7^4 ≡ 1, 7^5 ≡ 7, 7^6 ≡ 49, ... (mod 100).

A pattern emerges! We see that 7^4 ≡ 1 (mod 100). So for any positive integer k, we have

7^(4k) = (7^4)^k ≡ 1^k ≡ 1 (mod 100).

In particular, we can write

7^1940 = 7^(4 · 485) ≡ 1 (mod 100).

By the "multiplication" property above, then, it follows that

7^1942 = 7^1940 · 7^2 ≡ 1 · 49 ≡ 49 (mod 100).

Therefore, by the definition of congruence, 7^1942 differs from 49 by a multiple of 100. Since both integers are positive, this means that they share the same tens and units digits. Those digits are 4 and 9, respectively.

Problem #3

Can you find a number that is a multiple of 2, but not a multiple of 4, and also a perfect square?

No, you cannot. Rewriting the question, we see that it asks us to find an integer n that satisfies n^2 = 4k + 2 for some integer k.

Taking mod 4 on both sides, we find that n^2 ≡ 2 (mod 4). Now, all we are missing is proof that, no matter what n is, n^2 will never be a multiple of 4 plus 2, so we work with cases:

If n ≡ 0 (mod 2), then n = 2j for some integer j, so n^2 = 4j^2 ≡ 0 (mod 4).
If n ≡ 1 (mod 2), then n = 2j + 1, so n^2 = 4j^2 + 4j + 1 ≡ 1 (mod 4).

Since a square is always 0 or 1 mod 4, it is never 2 mod 4. This assures us that it is impossible to find such a number.


Summary of Useful Facts

Consider four integers a, b, c, d and a positive integer m such that a ≡ c (mod m) and b ≡ d (mod m). In modular arithmetic, the following identities hold:

 Addition: a + b ≡ c + d (mod m).
 Subtraction: a − b ≡ c − d (mod m).
 Multiplication: a · b ≡ c · d (mod m).

 Division: a/e ≡ c/e (mod m/gcd(m, e)), where e is a positive integer that divides a and c.

 Exponentiation: a^e ≡ c^e (mod m), where e is a positive integer.

Applications of Modular Arithmetic

Modular arithmetic is an extremely flexible problem solving tool. The following topics are just a
few applications and extensions of its use:

 Divisibility rules
 Linear congruences

A linear congruence is the problem of finding an integer x satisfying

ax ≡ b (mod m)

for specified integers a, b, and m. This problem could be restated as finding x such that

1. the remainder when ax is divided by m is b,
2. a*x % m == b in C code,
3. ax − b is divisible by m, or
4. in base m, ax ends in b.

Two solutions are considered the same if they differ by a multiple of m. (It’s easy to see that x is
a solution if and only if x + km is a solution for all integers k.)

For example, we may want to solve 7x ≡ 3 (mod 10). It turns out x = 9 will do, and in fact that is
the only solution. However, linear congruences don’t always have a unique solution. For
example, 8x ≡ 3 (mod 10) has no solution; 8x is always an even integer and so it can never end in
3 in base 10. For another example, 8x ≡ 2 (mod 10) has two solutions, x = 4 and x = 9.

We answer two questions.

1. How many solutions does ax ≡ b (mod m) have?
2. How can we compute them?
The answer to the first question depends on the greatest common divisor of a and m. Let g =
gcd(a, m). If b is not divisible by g, there are no solutions. If b is divisible by g, there are g
solutions.

So if g does divide b and there are solutions, how do we find them? The brute force solution
would be to try each of the numbers 0, 1, 2, …, m-1 and keep track of the ones that work. That
works in theory, but it is impractical for large m. Cryptography applications, for example, require
solving congruences where m is extremely large and brute force solutions are impossible.

First, suppose a and m are relatively prime. That is, assume g = gcd(a, m) = 1. We first put the
congruence ax ≡ b (mod m) in a standard form. We assume a > 0. If not, replace ax ≡ b (mod m)
with –ax ≡ –b (mod m). Also, we assume a < m. If not, subtract multiples of m from a until a <
m.

Now solve my ≡ –b (mod a). This is progress because this new problem is solving a congruence
with a smaller modulus since a < m. If y solves this new congruence, then x = (my + b)/a solves
the original congruence. We can repeat this process recursively until we get to a congruence that
is trivial to solve. The algorithm can be formalized into a procedure suitable for programming.
The result is closely related to the Euclidean algorithm.

Now what if the numbers a and m are not relatively prime? Then first solve the congruence
(a/g)y ≡ (b/g) (mod (m/g)) using the algorithm above. Then the solutions to ax ≡ b (mod m) are x
= y + tm/g where t = 0, 1, 2, …, g-1.

Here are a couple examples.

First, let’s solve 7x ≡ 13 (mod 100). Since 7 and 100 are relatively prime, there is a unique
solution. The algorithm says we should solve 100y ≡ -13(mod 7). Since 100 ≡ 2 (mod 7) and -13
≡ 1 (mod 7), this problem reduces to solving 2y ≡ 1 (mod 7), which is small enough to solve by
simply sticking in numbers. We find y = 4. Then x = (100*4 + 13)/7 = 59. You can verify that
7*59 = 413 so 7*59 ≡ 13 (mod 100). In general, we may have to apply the algorithm multiple
times until we get down to a problem small enough to solve easily.

Now let’s find all solutions to 50x ≡ 65 (mod 105). Since gcd(50, 105) = 5 and 65 is divisible by
5, there are 5 solutions. So we first solve 10x ≡ 13 (mod 21). The algorithm above says we can
solve this by first solving 21y ≡ -13 (mod 10), which reduces immediately to y ≡ 7 (mod 10), and
so we take y = 7. This says we can take x = (105*7 + 65)/50 = 16. The complete set of solutions
to our original congruence can be found by adding multiples of 105/5 = 21. So the solutions are
16, 37, 58, 79, and 100.
2. SOLVING LINEAR CONGRUENCE

How can we solve linear congruences? A tool that comes in handy when solving congruences is the multiplicative inverse. A multiplicative inverse for a in modulo n exists if and only if gcd(a, n) = 1. A multiplicative inverse for a is denoted a^(−1), where a · a^(−1) ≡ 1 (mod n).

If the numbers are small enough, it will usually be easy to find the multiplicative inverse by guessing and checking. However, there is a method that will find it for sure, though it may take a bit longer. So now let's go through a problem.

Solve 5x + 2 ≡ 4 (mod 7) for x.

Solution: Like what we would do for linear equations with integers, subtract 2 from each side, resulting in 5x ≡ 2 (mod 7). If you realize the multiplicative inverse of 5 modulo 7 is 3, because 5 · 3 ≡ 1 (mod 7), then we can multiply each side by 3, resulting in (5 · 3)x ≡ 2 · 3 (mod 7) ⟹ x ≡ 6 (mod 7).

However, if we didn't realize that the multiplicative inverse of 5 is 3, then we know that we are looking for some integers s and t such that 5s + 7t = 1. Recall that we said gcd(5, 7) = 1 is a requirement for the congruence to be solvable. This agrees with Bezout's Lemma, which says that this equation has an integer solution.

Now let's find that solution. By the division algorithm, we know

7 = 5(1) + 2 ⟹ 2 = 7(1) − 5(1)
5 = 2(2) + 1 ⟹ 1 = 5(1) − 2(2)
2 = 1(2) + 0.

Substituting the first equation into the second and expanding, we have

1 = 5(1) − 2(7(1) − 5(1)) = 5(3) − 7(2).

So, we know that 5 · 3 ≡ 1 (mod 7). In general, we proceed in the same way. Note that this is just a small example, so it is easily guessable; when the numbers get larger, this method becomes more practical.

Most linear congruences can be solved this way. However, congruences of the form ax + b ≡ c (mod n) where gcd(a, n) ≠ 1 cannot be solved using multiplicative inverses, since a has no inverse modulo n when gcd(a, n) ≠ 1.
3. ARITHMETIC WITH LARGE INTEGERS

Now that calculators are readily available they are sometimes used in classrooms to explore patterns in sequences of numbers. It has been pointed out that some patterns, such as

11 × 11 = 121
111 × 111 = 12321
1111 × 1111 = 1234321
...

only really become interesting when the numbers get too big for a calculator to handle accurately. Computers are usually no better but, in the article that follows, David Tall explains that he has the answer.

Arithmetic with large numbers

Modern microcomputers do not usually cope well with the arithmetic of large integers; instead they store and display numbers to an accuracy of a few digits, with inevitable rounding errors. This is a consequence of the way computer languages represent and process numbers, rather than an inherent weakness of the computer. By using a different representation it is possible to obtain accuracy to any specified degree, subject only to the limitations of available memory. In this article I shall show how BBC BASIC may be extended to calculate exact sums, differences, products, quotients, remainders and powers for whole numbers in the range ±10^254, and shall demonstrate how this facility can be used to factorize large numbers.

Normally BBC BASIC can only cope with integers in the range ±(2^31 − 1). The command A%=2^31-1 gives an integer variable its maximum value, whilst A%=2^31 produces the error message 'Too big'. The technical reason for this is that integers are stored internally in four memory locations, each of which can hold an eight-digit binary number. The number of binary digits available is therefore thirty-two. One of them is used to represent the sign. Thus the largest integer that can be represented is 2^31 − 1.

This can give unsatisfactory results. Even in the accepted integer range the numbers are not printed accurately in decimal notation. A%=2^31-1 : PRINT A% gives the expression 2.14748365E9 instead of the exact answer 2,147,483,647. A%=2^31-1 : PRINT 2*A% : PRINT A%+A% gives two radically different answers: 2*A% = 4.29496729E9 and A%+A% = −2. The difference arises from an error in the BASIC interpreter. Integer arithmetic is used wherever possible, and the interpreter switches to floating-point routines when integers become inappropriate. The computation 2*A% clearly exceeds the maximum integer size and is performed in floating-point arithmetic. But A%+A% is erroneously carried out as an integer calculation, with the sum of two thirty-one-digit binary numbers producing an error in the thirty-second place. This sets the minus sign, but fails to ...
Methods have been devised independently by Trevor Fletcher and Joe Watson. Joe's package is available on disk from Keele Department of Education for £5. The central idea is to hold the number in memory as a string, giving a maximum length of 255 digits. To perform the arithmetic the strings of digits are broken into manageable chunks (four digits at a time, say) which are turned into numbers so that they can be operated on using ordinary BASIC arithmetic. The results are then reassembled to give the answer as another long string of digits. Addition and subtraction are fast; multiplication and division are a little slower. The methods are ideal for the demonstration of what can be done in BASIC and are quite satisfactory for normal arithmetic. But time becomes a problem when a large number of operations is required. In such circumstances a method of speeding up the calculations becomes highly desirable. Faster arithmetic is made possible by writing the routines in machine code. The 6502 processor that drives the BBC Microcomputer includes routines for decimal arithmetic which are totally ignored by BBC BASIC. The program given on the disk available in connection with this issue of MICROMATH uses these neglected resources to provide arithmetic for large numbers. I shall illustrate some of the operations that can be carried out with large numbers.
4. CONGRUENCE OF SQUARES
In number theory, a congruence of squares is a congruence commonly used in integer
factorization algorithms.

Derivation

Given a positive integer n, Fermat's factorization method relies on finding numbers x, y satisfying the equality

x^2 − y^2 = n.

We can then factor n = x^2 − y^2 = (x + y)(x − y). This algorithm is slow in practice because we need to search many such numbers, and only a few satisfy the strict equation. However, n may also be factored if we can satisfy the weaker congruence of squares conditions:

x^2 ≡ y^2 (mod n)
x ≢ ±y (mod n)

From here we easily deduce

x^2 − y^2 ≡ 0 (mod n)
(x + y)(x − y) ≡ 0 (mod n)

This means that n divides the product (x + y)(x − y). Thus (x + y) and (x − y) each contain factors of n, but those factors can be trivial. In this case we need to find another x and y. Computing the greatest common divisors gcd(x + y, n) and gcd(x − y, n) will give us these factors; this can be done quickly using the Euclidean algorithm.

Congruences of squares are extremely useful in integer factorization algorithms and are
extensively used in, for example, the quadratic sieve, general number field sieve, continued
fraction factorization, and Dixon's factorization. Conversely, because finding square roots
modulo a composite number turns out to be probabilistic polynomial-time equivalent to factoring
that number, any integer factorization algorithm can be used efficiently to identify a congruence
of squares.

Further generalizations

It is also possible to use factor bases to help find congruences of squares more quickly. Instead of looking for x^2 ≡ y^2 (mod n) from the outset, we find many relations x^2 ≡ y (mod n) where the y have small prime factors, and try to multiply a few of these together to get a square on the right-hand side.

Examples

Factorize 35

We take n = 35 and find that

6^2 = 36 ≡ 1 ≡ 1^2 (mod 35).

We thus factor as

gcd(6 − 1, 35) · gcd(6 + 1, 35) = 5 · 7 = 35.

Factorize 1649

Using n = 1649, as an example of finding a congruence of squares built up from the products of non-squares (see Dixon's factorization method), first we obtain several congruences:

41^2 ≡ 32 (mod 1649)
42^2 ≡ 115 (mod 1649)
43^2 ≡ 200 (mod 1649)

Of these, two have only small primes as factors:

32 = 2^5
200 = 2^3 · 5^2

and a combination of these has an even power of each small prime, and is therefore a square:

32 · 200 = 2^(5+3) · 5^2 = 2^8 · 5^2 = (2^4 · 5)^2 = 80^2,

yielding the congruence of squares

32 · 200 = 80^2 ≡ 41^2 · 43^2 ≡ 114^2 (mod 1649).

So using the values of 80 and 114 as our x and y gives factors

gcd(114 − 80, 1649) · gcd(114 + 80, 1649) = 17 · 97 = 1649.
5. LUHN ALGORITHM
The Luhn algorithm, also known as the modulus 10 or mod 10 algorithm, is a simple checksum formula used to validate a variety of identification numbers, such as credit card numbers, IMEI numbers, and Canadian Social Insurance Numbers. The formula was created by the IBM scientist Hans Peter Luhn, whose patent on it was granted in 1960; shortly thereafter, credit card companies adopted it. Because the algorithm is in the public domain, it can be used by anyone. Most credit cards and many government identification numbers use the algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect numbers. It was designed to protect against accidental errors, not malicious attacks.

Steps involved in Luhn algorithm

Let’s understand the algorithm with an example:

Consider the example of an account number “79927398713“.

Step 1 – Starting from the rightmost digit, double the value of every second digit.

Step 2 – If doubling a digit results in a two-digit number, i.e. greater than 9 (e.g., 6 × 2 = 12), then add the digits of the product (e.g., 12: 1 + 2 = 3, 15: 1 + 5 = 6) to get a single-digit number.

Step 3 – Take the sum of all the digits, doubled and undoubled alike.

Step 4 – If the total modulo 10 is equal to 0, then the number is valid according to the Luhn formula; otherwise it is not. For "79927398713" the total is 70, so the number is valid.
6. MOD N CRYPTANALYSIS
In cryptography, mod n cryptanalysis is an attack applicable to block and stream
ciphers. It is a form of partitioning cryptanalysis that exploits unevenness in how the cipher
operates over equivalence classes (congruence classes) modulo n. The method was first
suggested in 1999 by John Kelsey, Bruce Schneier, and David Wagner and applied to RC5P (a
variant of RC5) and M6 (a family of block ciphers used in the FireWire standard). These attacks
used the properties of binary addition and bit rotation modulo a Fermat prime.

Mod 3 analysis of RC5P

For RC5P, analysis was conducted modulo 3. It was observed that the operations in the cipher
(rotation and addition, both on 32-bit words) were somewhat biased over congruence classes
mod 3. To illustrate the approach, consider left rotation by a single bit:

X ⋘ 1 = 2X, if X < 2^31
X ⋘ 1 = 2X + 1 − 2^32, if X ≥ 2^31

Then, because

2^32 ≡ 1 (mod 3),

it follows that

X ⋘ 1 ≡ 2X (mod 3).
Thus left rotation by a single bit has a simple description modulo 3. Analysis of other operations
(data dependent rotation and modular addition) reveals similar, notable biases. Although there
are some theoretical problems analysing the operations in combination, the bias can be detected
experimentally for the entire cipher. In (Kelsey et al., 1999), experiments were conducted up to
seven rounds, and based on this they conjecture that as many as 19 or 20 rounds of RC5P can be
distinguished from random using this attack. There is also a corresponding method for
recovering the secret key.
