[D] Thoughts on Quantum Artificial Intelligence / Q Supremacy

“Quantum Computing: The Why and How | Jonathan Baker, UChicago” https://www.youtube.com/watch?v=5kTiB_KDUj0

Hey 🙂 I just wanted to start a discussion on whether quantum algorithms will “revolutionize” machine learning. I’m not a quantum expert, so take my stance with a grain of salt.

I watched a bunch of videos (“Quantum algorithm for solving linear equations” https://www.youtube.com/watch?v=KtIPAPyaPOg, “Seth Lloyd: Quantum Machine Learning” https://www.youtube.com/watch?v=wkBPp9UovVU, etc.) and read some Wikipedia. Then I came across the diagram above, from Jonathan Baker’s talk.

According to Jonathan Baker, there are 3 main future trends for quantum computers; I just extrapolated his graphs. The green line is the most optimistic, relying on “co-design” (I’m not sure what that means exactly). The red line is less steep, and the blue line is just a straight-line continuation of the current qubit-count trend. (Notice the log10 scale.)

QAOA, the Quantum Approximate Optimization Algorithm, covers things like quantum linear regression and possibly (I don’t know) optimisation methods for backprop. The green line suggests that by 2025 quantum computers could be used for linear regression; the red, average case says around 2035, and the worst case around 2045.

To crack cryptography (i.e. run Shor’s algorithm at scale), over 10,000 qubits are needed. In the green best case that happens around 2032; the average case is 2045 and the worst case is 2067.
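
To make the extrapolation concrete, here’s a back-of-envelope sketch in Python. The starting point, growth rates, and qubit thresholds are all my own guesses, picked to roughly reproduce the years quoted above, not numbers from the talk:

```python
import math

# Back-of-envelope extrapolation of the three qubit-growth scenarios.
# The starting point, growth rates, and thresholds are guesses chosen to
# roughly match the years quoted above -- not data from the talk.
start_year, start_qubits = 2020, 53            # Sycamore-era device

growth_per_year = {"green (co-design)": 1.8,   # most optimistic
                   "red (moderate)":    1.4,
                   "blue (status quo)": 1.2}

targets = {"linear regression": 1_000,         # assumed threshold
           "Shor-scale crypto": 10_000}        # from the talk

for scenario, rate in growth_per_year.items():
    for task, qubits in targets.items():
        # Solve start_qubits * rate**t = qubits for t (years from start).
        t = math.log(qubits / start_qubits) / math.log(rate)
        print(f"{scenario}: {qubits:>6} qubits ({task}) ~ {start_year + t:.0f}")
```

On a log10 y-axis each scenario is just a straight line with a different slope, which is why the extrapolation is a one-liner.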

I was also reading the Wikipedia article “Quantum algorithm for linear systems of equations” https://en.wikipedia.org/wiki/Quantum_algorithm_for_linear_systems_of_equations, and it highlights how solving X * beta = y (i.e. A * x = b) takes O( log(P) * K^2 ), where K is the condition number and P is the # of coefficients in beta. The best classical iterative method, conjugate gradient, takes O( P * K ).

More concretely, the “exponential speedup” (I think???) applies to sparse matrices. If you include error bounds, then you get O( log(P) * K^2 / err ). For dense matrices you get O( sqrt(P) * log(P) * K^2 ).
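
Plugging toy numbers into those quoted costs shows where the log(P) scaling would actually start to win. This ignores constants and the sparsity factor; it just compares the simplified forms above:

```python
import math

# Compare the quoted asymptotic costs with toy numbers (constants ignored).
#   quantum (HHL-style, sparse):  log(P) * K^2 / err
#   classical conjugate gradient: P * K
K, err = 100.0, 1e-2        # condition number, error tolerance
for P in (1e3, 1e6, 1e9, 1e12):
    quantum = math.log(P) * K**2 / err
    classical = P * K
    better = "quantum" if quantum < classical else "classical"
    print(f"P={P:.0e}: quantum ~ {quantum:.1e}, classical ~ {classical:.1e}"
          f" ({better} smaller)")
```

Even in this toy setup the quantum cost only pulls ahead once P is large, and the K^2 / err factor can easily dominate for ill-conditioned problems or tight error tolerances.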

The issue I see is that since these methods all include error bounds, it isn’t necessarily fair to compare them against exact direct methods. A better comparison is against randomized methods, where an “exponential speedup” is also possible by sketching only log(N) rows. You could even apply randomized methods on top of quantum algorithms, so in total you might get a staggering “exponential-exponential” speedup, but because quantum algorithms inherently have error, this would exaggerate the error a lot.
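
For the randomized side, here’s a minimal sketch-and-solve least-squares example in NumPy. Note the sketch size m below is a heuristic choice on my part; the usual theory ties it to the number of features and the target accuracy rather than just log(N):

```python
import numpy as np

# Sketch-and-solve least squares: compress the N rows with a random
# projection, then solve the small m x d problem instead.
rng = np.random.default_rng(0)
N, d, m = 20_000, 10, 400          # rows, features, sketch size (heuristic)
X = rng.standard_normal((N, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + 0.1 * rng.standard_normal(N)

S = rng.standard_normal((m, N)) / np.sqrt(m)      # Gaussian sketch
beta_sketch, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)
beta_exact, *_ = np.linalg.lstsq(X, y, rcond=None)

print("||beta_sketch - beta_exact|| =",
      np.linalg.norm(beta_sketch - beta_exact))
```

The leftover error here comes entirely from the sketch, which is the point above: stacking a randomized sketch on top of an inherently approximate quantum solver compounds the two error sources.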

So what do people think about the potential of quantum algorithms for ML?

PS: The graph above is surprisingly a log10 plot (i.e. x10 per division). That’s clearly a different rate from Moore’s Law (x2 for the # of transistors roughly every two years), but anyway, I’m guessing qubits don’t follow Moore’s Law.

submitted by /u/danielhanchen