This is the talk page for discussing improvements to the Conjugate gradient method article. This is not a forum for general discussion of the article's subject.
Archives: 1 (auto-archiving period: 2 months)
This article is rated C-class on Wikipedia's content assessment scale.
This page has archives. Sections older than 60 days may be automatically archived by Lowercase sigmabot III. |
The explanation skips a large and important part of the correctness proof, namely showing that the mentioned "Gram-Schmidt orthonormalization" only needs to consider the latest direction $p_{k-1}$ and need not be performed against all previous directions $p_j$ with $j < k$. This can be deduced from the fact that the $r_k$ and the $p_k$ span the same Krylov subspace, but it should be highlighted how this implies $p_j^T r_k = 0$ for $j < k$ and $p_j^T A r_k = 0$ for $j < k-1$, so that all Gram-Schmidt coefficients except the one for $p_{k-1}$ vanish. — Preceding unsigned comment added by 46.128.186.9 (talk) 16:04, 13 December 2023 (UTC)
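For illustration, a small numerical check of the two relations above (a sketch, not the article's pseudocode: a plain textbook CG loop in Python/NumPy on an arbitrary random SPD system):

import numpy as np

# Arbitrary small SPD test system (not from the article).
rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # symmetric positive definite by construction
b = rng.standard_normal(n)

# Textbook conjugate gradient, storing all residuals r_k and directions p_k.
x = np.zeros(n)
r = b - A @ x
p = r.copy()
residuals, directions = [r.copy()], [p.copy()]
for _ in range(n):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
    residuals.append(r.copy())
    directions.append(p.copy())
    if np.linalg.norm(r) < 1e-12 * np.linalg.norm(b):
        break

# Check p_j^T r_k = 0 for j < k and p_j^T A r_k = 0 for j < k-1 (to rounding error).
for k, rk in enumerate(residuals):
    for j, pj in enumerate(directions[:k]):
        assert abs(pj @ rk) < 1e-8
        if j < k - 1:
            assert abs(pj @ (A @ rk)) < 1e-8
print("both orthogonality relations hold up to rounding error")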
In your algorithm, the formula to calculate $p_k$ differs from Jonathan Richard Shewchuk's paper. The index of $r$ should be $k$ instead of $k-1$. Mmmh, sorry, it seems to be correct! ;-) —The preceding unsigned comment was added by 171.66.40.105 (talk • contribs) 01:45, 21 February 2007 (UTC)
Additional comment: basis vector set $P = \{ p_0, \ldots, p_{n-1} \}$.
In the subsection, I can see that $r_i^T r_j = 0$ (quoted from the same subsection). However, in the algorithm, in the calculation of $\beta$, $r_k^T r_k$ comes up in the denominator, which does not look right to me. In my opinion, the denominator should be $r_k^T r_{k+1}$. However, I could be incorrect, and hence I would like to bring this to the notice of the experts. — Preceding unsigned comment added by Zenineasa (talk • contribs) 13:25, 26 November 2021 (UTC)
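For reference, a short derivation sketch (assuming exact arithmetic and the standard updates $r_{k+1} = r_k - \alpha_k A p_k$ and $p_{k+1} = r_{k+1} + \beta_k p_k$) of why $r_k^T r_k$, rather than $r_k^T r_{k+1}$ (which is zero by the orthogonality of successive residuals), ends up in the denominator:

\begin{align*}
\beta_k &= -\frac{r_{k+1}^T A p_k}{p_k^T A p_k}
  && \text{(chosen so that } p_{k+1}^T A p_k = 0\text{)} \\
  &= -\frac{r_{k+1}^T (r_k - r_{k+1})/\alpha_k}{p_k^T (r_k - r_{k+1})/\alpha_k}
  && \text{(since } A p_k = (r_k - r_{k+1})/\alpha_k\text{)} \\
  &= \frac{r_{k+1}^T r_{k+1}}{r_k^T r_k}
  && \text{(using } r_{k+1}^T r_k = 0,\; p_k^T r_k = r_k^T r_k,\; p_k^T r_{k+1} = 0\text{)}.
\end{align*}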
I can't wrap my head around the fact that it's optimizing $f(x) = \tfrac12 x^T A x - x^T b$. Abstractly, I think (?) the idea is that we choose $f$ such that $\nabla f(x) = Ax - b$, and so the residual vector is (up to sign) the gradient of the surface. But coming at it from the perspective of "let's minimize the residual", my line of thinking is: let $g(x) = \tfrac12 \lVert Ax - b \rVert^2 = \tfrac12 x^T A^T A x - x^T A^T b + \tfrac12 b^T b$, which we could drop the constant from. That's similar to, but decidedly different from, $f$. What gives? Is minimizing $f$ minimizing the residual? Can the norm of the residual be massaged to look like $f$? —Ben FrantzDale (talk) 21:30, 18 January 2023 (UTC)
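One identity that might help here (a sketch, assuming $A$ is symmetric positive definite and $r = b - Ax$): the quadratic $f$ is, up to an additive constant, the squared residual norm measured in the $A^{-1}$-inner product rather than the Euclidean one,

\begin{align*}
\tfrac12\, r^T A^{-1} r
  &= \tfrac12\,(b - Ax)^T A^{-1} (b - Ax) \\
  &= \tfrac12\, x^T A x - x^T b + \tfrac12\, b^T A^{-1} b \\
  &= f(x) + \tfrac12\, b^T A^{-1} b .
\end{align*}

So minimizing $f$ does minimize the residual, just in the $\lVert\cdot\rVert_{A^{-1}}$ norm; minimizing the plain Euclidean $\tfrac12\lVert Ax - b\rVert^2$ instead gives the normal-equations functional built from $A^T A$, which is why $g$ looks similar but not identical to $f$. Also, $\nabla f(x) = Ax - b = -r$, so the residual is the negative gradient of $f$.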
The method is supposed to be used for PSD matrices, but the example given in "A pathological example" is not PSD. Thomasda (talk) 17:47, 23 January 2024 (UTC)
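For what it's worth, a quick way to check such a claim numerically (a sketch in Python/NumPy; the matrix below is only a placeholder, not the one from "A pathological example"):

import numpy as np

# Placeholder matrix for illustration; substitute the article's example here.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

symmetric = np.allclose(A, A.T)
eigenvalues = np.linalg.eigvalsh(A) if symmetric else np.linalg.eigvals(A)
print("symmetric:", symmetric)
print("eigenvalues:", eigenvalues)
print("positive semidefinite:", bool(symmetric) and bool(np.all(eigenvalues >= -1e-12)))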