SciPost Submission Page
Residual Matrix Product State for Machine Learning
by Ye-Ming Meng, Jing Zhang, Peng Zhang, Chao Gao, Shi-Ju Ran
Authors (as registered SciPost users): Chao Gao · Yeming Meng
Preprint Link: scipost_202209_00017v2 (pdf)
Date submitted: 2022-12-09 07:46
Submitted by: Meng, Yeming
Submitted to: SciPost Physics
Tensor networks, which originate from quantum physics, are emerging as efficient tools for both classical and quantum machine learning. Nevertheless, a considerable accuracy gap remains between tensor networks and sophisticated neural network models on classical machine-learning tasks. In this work, we combine the ideas of the matrix product state (MPS), the simplest tensor network structure, and the residual neural network, and propose the residual matrix product state (ResMPS). The ResMPS can be treated as a network whose layers map the "hidden" features to the outputs (e.g., classifications), and whose variational parameters are functions of the features of the samples (e.g., pixels of images). This differs from a neural network, whose layers feed-forwardly map the features to the output. The ResMPS can be equipped with non-linear activations and dropout layers, and it outperforms state-of-the-art tensor network models in terms of efficiency, stability, and representation power. Moreover, the ResMPS is interpretable from the perspective of polynomial expansion, where factorization and exponential machines naturally emerge. Our work contributes to connecting and hybridizing neural and tensor networks, which is crucial for further improving our understanding of the working mechanisms and the performance of both models.
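As a rough illustration of the architecture the abstract describes (not the authors' exact implementation), one can sketch a residual update in which each layer's action on the hidden vector is scaled by one input feature, so the layer parameters effectively become functions of the sample's features. All names, the linear feature dependence, and the dimensions below are illustrative assumptions:

```python
import numpy as np

def resmps_forward(x, weights, h0, W_out):
    """Sketch of a residual hidden-vector update: layer n mixes the
    hidden vector with itself, scaled by the n-th input feature x[n]."""
    h = h0.copy()
    for n, W in enumerate(weights):
        h = h + x[n] * (W @ h)   # residual term vanishes when x[n] == 0
    return W_out @ h             # map final hidden vector to class scores

rng = np.random.default_rng(0)
L, chi, n_classes = 8, 4, 3      # sequence length, hidden dim, outputs
x = rng.random(L)                # e.g. flattened pixel features
weights = [0.1 * rng.standard_normal((chi, chi)) for _ in range(L)]
h0 = np.ones(chi)
W_out = rng.standard_normal((n_classes, chi))
print(resmps_forward(x, weights, h0, W_out).shape)  # (3,)
```

Note that when all features are zero, the residual terms drop out and the output reduces to `W_out @ h0`, which is the skip-connection behavior that distinguishes this update from a plain multiplicative MPS contraction.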
Published as SciPost Phys. 14, 142 (2023)
Author comments upon resubmission
Dear Editor and referees, we are grateful for your comments and feedback on our manuscript. We have modified the manuscript accordingly and list the changes we have made below.
We thank the referee for pointing out the weaknesses. As the referee mentioned, the bond dimension $\chi$ remains a valid measure of the representation power of an MPS. On the other hand, the pruning test suggests an alternative approach to truncating an MPS: limiting the total number $M$ of variational parameters that actually contribute to the prediction accuracy. This leads to a new variant of ResMPS called "sparse ResMPS", whose representation power is characterized by $M$ rather than $\chi$. Based on the above discussion, we have revised section 2.4.3 accordingly.
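To make the truncation-by-parameter-count idea concrete, one simple realization is global magnitude pruning: keep only the $M$ largest-magnitude entries across all layer tensors and zero the rest. This is only a hypothetical sketch of limiting the effective parameter count, not necessarily the procedure used in the paper:

```python
import numpy as np

def prune_by_magnitude(tensors, M):
    """Keep only the M largest-magnitude parameters across all layer
    tensors, zeroing the rest (illustrative global magnitude pruning)."""
    flat = np.concatenate([t.ravel() for t in tensors])
    if M < flat.size:
        threshold = np.sort(np.abs(flat))[-M]   # M-th largest magnitude
        tensors = [np.where(np.abs(t) >= threshold, t, 0.0) for t in tensors]
    return tensors

rng = np.random.default_rng(1)
tensors = [rng.standard_normal((4, 4)) for _ in range(3)]  # 48 parameters
pruned = prune_by_magnitude(tensors, M=10)
kept = sum(int(np.count_nonzero(t)) for t in pruned)
print(kept)  # 10 surviving parameters
```

In this picture the representation power is controlled by the surviving parameter count $M$, independently of the tensors' bond dimension.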
- Please refer to the Weaknesses discussed above.
- We concur with the referee's assessment that our argument was lacking in support. We have removed it in the revised version.
- We agree with the referee's point of view. A TN can be used to represent a linear map, but the concept of a TN is more general than that, going beyond what our sentence implied. We have therefore corrected our statement.
- We thank the referee for carefully identifying this problem; we fixed it in the revised version.
- Following the referee's suggestion, we swapped their positions accordingly.
- We fixed this issue.
- We fixed this issue.
List of changes
1. Line 48. We corrected the issue in the description of TN as a linear map between quantum states.
2. Lines 77-78 & Sec. 2.2, Sec. 2.3. We swapped section 2.2 and section 2.3, and updated the paragraph describing the structure of the paper accordingly.
3. Line 108. We added the missing "not" to the original text.
4. Line 116. We corrected the reference to figure 1(f) to correctly refer to figure 1(e).
5. Line 184. We removed the final sentence that lacked sufficient support.
6. Sec. 2.4.3. We completely rewrote the subsection.
7. Lines 361-362. We corrected the description of the entanglement entropy decay of ResMPS.