3 comments:

  1. You'll like this take on the current state of "AI" as well: https://mathematicswithoutapologies.wordpress.com/2021/07/21/announcing-a-new-newsletter-on-mechanizing-mathematics/

  2. QC is not truly nascent. It has been around for 30 years with basically nothing to show for all the billions wasted. It's one of those areas where the sizzle is so delectable that one forgets to notice there's no steak. Victor rightly called it out as a Ponzi scheme, but he didn't go into the technical details of why that is, so people might think it's still possible at some stage. Not so.

    QC uses qubits to perform calculations, and to do anything meaningful the number of qubits (logical qubits, that is) has to be in the realm of 1,000 at least, if not far more. As an example: by one estimate for Shor's algorithm, the number of data qubits needed just to factor an N-bit number (done the right way, no parlor tricks) is 72*N^3. For a 3-bit number that already works out to 1,944 data (i.e. logical) qubits. The "best" QC right now has what, 53 qubits from Google? Are those logical or merely physical qubits? If they're physical, then even assuming they work perfectly (they don't, due to serious issues) and that a logical qubit consists of a single physical qubit, they can't even factor a 1-bit number. If those 53 qubits have to be combined into logical qubits, the result is even less impressive.
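
    For perspective, here is a quick back-of-the-envelope script (Python) that plugs a few key sizes into the 72*N^3 data-qubit figure quoted above (taking it purely as a rule of thumb) and compares the result with the 53 physical qubits of Google's chip:

        # Rule-of-thumb data-qubit count for factoring an N-bit number with
        # Shor's algorithm, using the 72*N^3 figure quoted in this comment.
        def shor_data_qubits(n_bits):
            return 72 * n_bits ** 3

        PHYSICAL_QUBITS = 53   # Google's chip, best case: 1 logical qubit = 1 physical qubit

        for n in (1, 3, 1024, 2048):
            needed = shor_data_qubits(n)
            print(f"{n:>5}-bit number: ~{needed:,} logical data qubits "
                  f"({needed / PHYSICAL_QUBITS:,.0f}x what exists today)")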

    After Google's announcement, a Chinese team also rushed ahead to claim quantum supremacy with their photonic QC (an optical QC using qubits made out of photons, unlike Google's QC, which is built from superconducting Josephson junctions):

    https://www.hpcwire.com/2020/12/16/researchers-from-china-demonstrate-quantum-supremacy-using-an-optical-method/

    Here’s their research paper:

    https://arxiv.org/ftp/arxiv/papers/2012/2012.01625.pdf

    Reading through it, one can spot serious scalability problems. Among others, here's their own admission:

    “Although there were small-scale demonstrations of GBS with up to five photons, implementing a large-scale GBS faced a number of experimental challenges. First, it requires a large number of SMSS sources with sufficiently high squeezing parameters, photon indistinguishability, and collection efficiency, simultaneously (46). Second, all the photons are interfered inside a sufficiently large and deep interferometer with full connectivity, matrix randomness, near-perfect wave-packet overlap and phase stability, and near-unity transmission rate, simultaneously…”

    In a word, the precision problems come from using coarse-grained, imperfect, classical devices to manipulate and measure quantum-scale particles/waves. This has been a 20-year problem with no visible progress. Of course proponents will claim steady progress, but in the grand scheme of scalable quantum computing all of that progress is so tiny it effectively amounts to zero.

    And this is the worst disease of QC:

    “The whole optical set-up is phase-locked to maintain a high coherence between the superposition of all photon number states.”

    QC generally works on the principle of entangled qubits. It's precisely this entanglement, or superposition applied to a large number of amplitudes simultaneously, that makes QC potentially very powerful. But every quantum system suffers from decoherence, which collapses the whole continuum of states (amplitudes) into one on measurement. Decoherence time also gets progressively worse as the number of qubits (N) increases, not to mention that the quantum computer, being a classical device itself, can act as yet another decoherence source proportional to its size (which in turn grows with N), especially in the case of photonic QC. Photonic QC also suffers from physical alignment issues as the number of optical components grows exponentially with the qubit count.
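
    To see why more qubits mean less coherence, here is a crude toy model (Python); it assumes every qubit decoheres independently with the same coherence time, and both the 100-microsecond coherence time and the 1-microsecond gate step are illustrative numbers, not measured values:

        import math

        T2_SINGLE = 100e-6   # assumed single-qubit coherence time: 100 microseconds (illustrative only)
        T_STEP = 1e-6        # assumed duration of one gate step: 1 microsecond (illustrative only)

        def register_coherence(n_qubits, t):
            # Probability that all n_qubits are still coherent after time t,
            # assuming each one decays independently with time constant T2_SINGLE.
            return math.exp(-n_qubits * t / T2_SINGLE)

        for n in (1, 53, 1000):
            p = register_coherence(n, T_STEP)
            print(f"{n:>5} qubits: P(whole register still coherent after one step) ~ {p:.3g}")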

    QC works by performing unitary transformations (quantum gates) on the qubits, and for it to work correctly the calculation has to be reversible, i.e. a QC system can only evolve linearly. But all realistic systems are nonlinear, because no system is perfectly isolated. This unwanted coupling with the environment introduces additional, uncontrollable degrees of freedom into the phase space of the quantum states, turning the whole (QC + surroundings) into a complex system at the edge of chaos. The result is irreversible evolution, which violates a working rule of QC, destroys results, and necessitates an error-correction process. But due to the entangled nature of the qubits, real-life errors are all correlated, which renders error-correction schemes based on the threshold theorem (which assumes uncorrelated errors, among other things) useless. This gets exponentially worse at high N, because the number of elements that ALL require high-precision control is 2^N in a QC, and those 2^N elements are all correlated.
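
    For a rough sense of the precision required, here is a hedged back-of-the-envelope sketch (Python). It assumes independent per-gate errors, which is already more optimistic than the correlated errors argued above, and the gate count of 10^11 is a hypothetical figure chosen only for illustration:

        # If each gate fails independently with probability eps, a circuit of G gates
        # succeeds with probability roughly (1 - eps)^G. Independent errors are the
        # optimistic case; correlated errors (as argued above) only make this worse.
        def circuit_success(gate_error, gate_count):
            return (1.0 - gate_error) ** gate_count

        GATE_COUNT = 10**11   # hypothetical gate count for a large factoring-scale circuit
        for eps in (1e-3, 1e-6, 1e-9, 1e-12):
            print(f"per-gate error {eps:.0e}: overall success probability ~ "
                  f"{circuit_success(eps, GATE_COUNT):.3g}")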

    For a 1,000-qubit QC that means 2^1000 continuous variables (continuous because each qubit is basically a continuous wave function described by 2 complex, as in complex-number, amplitudes). Can anyone hope to control 2^1000 continuous analog waves to the high precision required for QC to work!? What about systems with 10k or 100k qubits?? Given that all errors are correlated in reality, due to entanglement and the limited spacing between components inside a quantum computer, even one erroneous qubit has a massive impact on the whole system, and one can easily see how, as the system gets bigger (bigger N), the error problem gets exponentially worse, basically unsolvable. We simply can't satisfy Laplace's demon, because there are too many unknown variables. Let's face it: we can't even make a classical device like a router or a computer work without hiccups; we've been trying to build classical, all-optical routers for 30 years without success, despite understanding the science of light far better than we understand quantum mechanics. How can we begin to hope to build any working thing on a quantum scale (which is exponentially more difficult than any classical system) anytime soon?

    One other thing: as more and more operations are performed on these N qubits, each operation introducing more skew/perturbation, the whole system descends deeper and deeper into chaos, and classical computers can't model chaos because of the limited amount of state they have, so they can't model a working QC. To build large systems these days we need to model/simulate them, and QC requires high-precision modeling. That alone is an insurmountable engineering challenge; it's fundamental.
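
    To put the simulation problem into numbers, here is a small sketch (Python) of how much classical memory a full state-vector model of N qubits needs, assuming the standard representation of 2^N complex amplitudes at 16 bytes each (double-precision complex):

        # Classical memory needed to hold a full state vector of N qubits,
        # assuming 2^N complex amplitudes at 16 bytes each.
        def state_vector_bytes(n_qubits):
            return 16 * 2.0 ** n_qubits

        for n in (30, 53, 100, 1000):
            print(f"{n:>4} qubits: 2^{n} amplitudes ~ {state_vector_bytes(n):.3e} bytes")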

    These are just a few examples from a long list of insurmountable technical issues that QC proponents have to overcome if they want to build a working, scalable QC. And no, AFAIK, no one in the world knows how to deal with these problems at scale, the first and foremost reason being that no one understands why quantum mechanics works the way it does. Yes, we have mathematical representations, and yes, we can use things like wave mechanics to try to explain stuff like the uncertainty principle, but at best it's an educated guess, a kind of back-fitted narrative. How much of it is a true description of the reality of atomic and subatomic matter, we don't know, because we have no visibility into it. In a word, we don't know enough about matter, scientifically, so how can we build a QC on such an empty foundation?

    And that is the central issue with QC: there's no known first step toward achieving a working QC. How can there be one when the science is still in the dark? We don't know anything more about QM today than we did back in the 1930s; we only have more interpretations of it. Isn't the reasonable thing to do to first advance our understanding of QM, a monumental task in itself, before thinking about turning it into a usable technology? But do we see that being done these days, at Big Tech, in China, and elsewhere, or are they just handwaving with rubbish corporate PR releases, because the real problem is too hard for their software-eating-the-world mindset and hurts their fragile egos?

    As it stands now, QC can hardly even be called a technology. It's at best a proof-of-concept demo. Think about the insanity of trying to build something when we have next to zero understanding of the underlying process. As I said, we can predict the phenomena, but we don't know WHY they are what they are, and it's the WHY that pretty much determines the scalability of a working QC. The QC field, if it can be called that, has no grounding in reality and is filled with frauds who just want to jump on the bandwagon and take advantage of the big easy money: a Ponzi scheme indeed. These frauds try to hide their insurmountable problems behind a wall of fancy maths that confuses the hell out of people, so they can avoid being asked the hard questions. Indeed, QC is so far removed from our current capability that it's effectively science fiction, like AI. Worse, this kind of voodoo and witchcraft inflicts opportunity costs, as it misleads otherwise smart people into chasing and worshipping a technological mirage instead of taking on problems worth solving right now.

    QC, nuclear fusion, and AI, given the problems plaguing them, are just ridiculous white elephants, so they don't justify the kind of money being poured into them. They won't come to be in 100 years, or even 1,000 years for that matter. Not happening. Just look at the history of fusion and you get a rough idea of how long it takes for certain things to arrive, if ever. And we understand fusion science far better than we understand QC. Heck, even fission breeder-reactor technology has been an utter failure in terms of effectiveness, despite fission science and technology being much better understood and proven than fusion. We've been living with an illusion of progress for a long time now.

    Rodney Brooks also expressed skepticism toward QC in his post about Moore’s Law here:

    https://rodneybrooks.com/the-end-of-moores-law/

    He referred to Scott Aaronson as a QC expert. Scott himself did write an article laying out his views on, and belief in, the eventual realization of QC here:

    https://www.scottaaronson.com/blog/?p=124

    Reading it doesn't inspire much (if any) confidence though, as points 5 and 6 are exactly the engineering problems I described above; they're not going to get solved in the next 1,000 years. Point 9 seems to be a consequence of the Extended Church-Turing thesis, but we had working computers before theorists like Turing and von Neumann formed their useless theories, so I'm not too sure of its importance and relevance, even now with the benefit of hindsight. Point 12 is the most popular argument, and it's especially troublesome, because it's exactly the science that's missing here. That's why it's been 15 years since his post and precious little has happened to bring us any closer to a working QC. Contrast that with how long it took to get from the discovery of nuclear fission to Enrico Fermi's team engineering the first working fission reactor: about four years.

  3. Indeed, QC and OpenFlow have a lot in common. They both create stories with emotional appeal, they both require the invention of new physics, and they're both filled with grand vision, grandstanding, and empty promises. Yet there's no shortage of PhDs, high hopes, cash infusions from VCs, and a Cambrian explosion of research papers, much of whose content isn't even worth the paper it's printed on.

    From what I see, OpenFlow, or SDN v1, was just a rebranding movement to create hype and gather funding for areas that needed no change and no new buzzword. Network management/automation, virtualization, and even whiteboxing were all a natural evolution of the virtualization resurgence of the 2000s, following their server counterparts, and OF added nothing to the paradigm to further our knowledge, except proving the obvious: a centralized control plane, and even more so a literally centralized controller, just doesn't scale. It would require the invention of new physics to work efficiently, and what a shame that its proponents, many of them high-end academics who did bleeding-edge research in networking and were supposed to know better, only realized this a few years down the track. The OF originators probably believe in multiverses and think OF can be made to work in one of their Many Worlds. And when it turned out to be a fiasco, owing to the need for new physics that doesn't exist in our universe, the SDN hype got re-associated with all sorts of other things, like network automation and then the programmable data plane. In recent years the programmable-data-plane disease seems to be in full swing; it's touted as the next big thing, and its proponents want in-network computing, probably harnessing the great power of AI, to be the next paradigm shift.
