This gets back to the fundamental question of how hard factoring is.
We saw that it was possible to factor 129 decimal digits in 1994.
The computing resources to do this were about 1600 machines
collaborating on the internet.
Certainly that's much less than many people have access to today.
There are larger numbers that have been broken since.
The largest challenge number that's been broken so far is RSA-768.
Here the 768 counts bits rather than decimal digits;
that's equivalent to 232 decimal digits.
It was factored in 2009 using about 2000 CPU-years of computation.
If we want to know that RSA is secure, we need to understand
how the cost of factoring depends on the size of the numbers that we need to factor.
We'd like to know that if we pick a large enough key,
even an adversary with a large amount of computational power--
and as the years go on adversaries will have more and more power--
still won't be able to factor the number and break RSA.
We don't know any way to actually prove a problem like this is fundamentally hard.
The best we can do is look at the best-known algorithms
and take some confidence from the fact that people have worked very hard
to find better algorithms and progress has been slow: barring some
very unexpected breakthrough, they won't get much faster.
This is quite unsatisfying, but it's the best we can do.
To talk about this, I want to measure the size of the input.
That's the number of bits--we'll use "b" for that--in the modulus.
We could try a brute force approach, which would go through all the numbers
from 2 up to the square root of n, checking whether each one is a factor.
If it finds a factor, then it returns that.
Let's be optimistic and assume, unrealistically, that each divisibility test--
which could be done by computing the greatest common divisor of the two numbers--
takes constant time.
Even so, we're going to need to go through this loop square root of n times,
which means our running time will be linear in the square root of n.
But b is the log of n, so n is about 2^b and its square root is 2^(b/2):
our running time will be exponential in b/2.
This clearly is not going to work for large b.
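To make the brute-force argument concrete, here is a minimal Python sketch of trial division (the toy modulus and the function name are my own illustration, not part of the lecture):

```python
from math import gcd, isqrt

def trial_division_factor(n):
    """Brute-force factoring: try every candidate from 2 up to sqrt(n).

    Returns the smallest nontrivial factor of n, or None if n is prime.
    Even if each test were constant time, the loop runs roughly
    sqrt(n) = 2^(b/2) times for a b-bit modulus.
    """
    for d in range(2, isqrt(n) + 1):
        # The divisibility test via a GCD, as in the lecture: because we
        # try candidates in increasing order, the first d sharing a common
        # divisor with n is itself a prime factor of n.
        if gcd(d, n) > 1:
            return d
    return None

n = 3233                     # toy modulus: 53 * 61
b = n.bit_length()           # b, the number of bits in the modulus
print(b, trial_division_factor(n))   # prints: 12 53
```

Even granting the unrealistic constant-time test, the sheer number of loop iterations is what makes this approach hopeless.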
But this is not the best-known factoring algorithm.
The fastest known factoring algorithm as of 2012 when I'm recording this
is what's known as the General Number Field Sieve.
This is quite a bit faster than brute force, but it is still far from efficient.
Its running time is exponential in b^(1/3) * (log b)^(2/3),
which is still much worse than polynomial.
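As a rough, hedged illustration of how the two costs grow (the constant (64/9)^(1/3) is the standard heuristic one, and lower-order terms are dropped, so treat these numbers as growth-rate comparisons rather than predictions of real effort):

```python
from math import log

def brute_force_log2_cost(b):
    """log2 of the trial-division cost for a b-bit modulus: 2^(b/2) steps."""
    return b / 2

def gnfs_log2_cost(b):
    """log2 of the heuristic GNFS cost exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)),
    with ln n ~ b * ln 2 and c = (64/9)^(1/3); order-of-magnitude only."""
    ln_n = b * log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3) / log(2)

for b in (512, 768, 1024, 2048):
    print(f"b={b}: brute force ~2^{brute_force_log2_cost(b):.0f}, "
          f"GNFS ~2^{gnfs_log2_cost(b):.0f}")
```

Doubling the key size raises the GNFS exponent far more slowly than the brute-force exponent, but both still grow faster than any polynomial in b.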
One important caveat--this is the best known factoring algorithm assuming a classical computer.
If you have a large quantum computer, which no one does yet,
there's a faster algorithm, which is known as Shor's algorithm,
which was created by Peter Shor in 1994.
That actually has a running time that's polynomial in the number of bits.
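To give a flavor of why efficient order finding yields factors, here is a hedged Python sketch of the classical reduction at the core of Shor's algorithm. The order-finding step below is done by brute force, which is exactly the part a quantum computer replaces with a polynomial-time period-finding subroutine (the function names are my own, and edge cases such as even n or prime powers are glossed over):

```python
from math import gcd
from random import randrange

def find_order(a, n):
    """Multiplicative order of a mod n by brute force.
    Classically this can take exponentially many iterations;
    the quantum part of Shor's algorithm does it in polynomial time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_reduction(n):
    """Turn an order-finding oracle into a factoring method (sketch)."""
    while True:
        a = randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g               # lucky: a already shares a factor with n
        r = find_order(a, n)
        if r % 2 == 1:
            continue               # need an even order; pick another base
        y = pow(a, r // 2, n)      # y^2 = 1 (mod n), and y != 1 since r is minimal
        if y == n - 1:
            continue               # y = -1 (mod n) gives no information
        return gcd(y - 1, n)       # guaranteed nontrivial factor of n

print(shor_classical_reduction(3233))   # prints 53 or 61
```

Everything here except find_order is cheap; that single step is where the quantum speedup lives.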
Shor's discovery, which we won't explore in more depth in this class,
is a big part of why there's a tremendous amount of practical interest
as well as theoretical interest in quantum computing:
it appears to offer a way to solve some problems efficiently
that seem to be hard for classical computers.
This means that factoring is in a complexity class called
"bounded error quantum polynomial time"--known as BQP.
What's unknown is whether factoring is NP-hard--
that is, at least as hard as the hardest problems that can be solved
by a nondeterministic Turing machine in polynomial time.