Low-density parity-check code received a peer review by Wikipedia editors, which is now archived. It may contain ideas you can use to improve this article.
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
This topic is in need of attention from an expert on the subject. The section or sections that need attention may be noted in a message below. |
For a person who has not studied information theory, the introductory section says absolutely nothing. It's some state-of-the-art code, right. For what, where, and why is it used? I think the introduction should answer these questions first; then it's okay to jump into the technical details. Besides, what's Forney's factor graph notation anyway? Maybe an article about it should be created, or at least Forney linked somewhere (elsewhere than to Forney, Texas). --ZeroOne 19:52, 12 Oct 2004 (UTC)
I'd agree with the first comment. I am not an information theory specialist and have been trying to get an idea of what advantage LDPC gives over, e.g., Reed–Solomon, but cannot find this basic information anywhere. Andrew P, UK.
Not impressed with the article either. But in answer to your question: LDPC codes approach the theoretical (Shannon) limit as the block size increases. (I.e., they are the best, in terms of error recovery, that an error-correcting code can be.) A Google search for LDPC turns up a lot of good articles. Nahaj 15:25, 8 November 2005 (UTC)
Although Gallager's original codes used a randomly generated parity-check matrix, this doesn't have to be the case. A subset of Gallager codes can be built analytically by an algebraic method based on shortened Reed-Solomon codes with two information symbols. LDPC codes can also be constructed algebraically based on the points and lines of finite geometries. The highest performance LDPC codes are constructed using a method known as density evolution.
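For readers wondering what a "randomly generated parity-check matrix" looks like in practice, here is a minimal sketch (Python/NumPy, with parameters made up for illustration) of Gallager's original construction: one band of consecutive ones, stacked with column-permuted copies of itself.

```python
import numpy as np

def gallager_parity_matrix(n, wc, wr, seed=None):
    """Sketch of Gallager's random (wc, wr)-regular parity-check matrix:
    a band of consecutive ones plus wc - 1 column-permuted copies."""
    assert n % wr == 0, "block length must be divisible by the row weight"
    rng = np.random.default_rng(seed)
    rows = n // wr
    band = np.zeros((rows, n), dtype=np.uint8)
    for i in range(rows):
        band[i, i * wr:(i + 1) * wr] = 1          # wr consecutive ones per row
    blocks = [band] + [band[:, rng.permutation(n)] for _ in range(wc - 1)]
    return np.vstack(blocks)

H = gallager_parity_matrix(n=20, wc=3, wr=4, seed=1)
print(H.shape)        # (15, 20): wc * n / wr = 15 parity checks
print(H.sum(axis=0))  # every column has weight wc = 3
print(H.sum(axis=1))  # every row has weight wr = 4
```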
Density evolution is a measure of the code, not a method of code construction. To quote Moon's "Error Correction Coding": "Density evolution is an analytical technique which can be used to understand limits of performance of LDPC decoders." Although, obviously, ANYTHING that measures a code can be used to AID in generating good codes. (And there have been many articles on building various subsets of Gallager codes by many, many different methods.) If the author of the anonymous comment above would like to provide a *current* reference for their interesting claim that "The highest performance LDPC codes are constructed using a method known as density evolution.", I'd appreciate it. Work on finding GOOD non-randomly generated LDPC codes is still ongoing in a number of places... I'd recommend the "IEEE Trans. Information Theory" and the "IEEE Trans. Communications" as jumping-in points. Nahaj 17:18, 16 January 2006 (UTC)
The reference I had in mind when making the statement about "The highest performance LDPC codes..." was S.-Y. Chung, G. D. Forney, T. J. Richardson, R. L. Urbanke, "On the Design of Low-Density Parity-Check Codes within 0.0045 dB of the Shannon Limit," IEEE Communications Letters, vol. 5, no. 2, Feb. 2001. The authors develop an improved implementation of density evolution called discretized density evolution. Using this algorithm and an improved optimization algorithm they design good rate-1/2 irregular LDPC codes for binary-input AWGN channels that approach the Shannon limit very closely, i.e., to within 0.0045 dB. I'm not sure whether this is regarded as a current reference or not. Duncan Aitken, UK.
The claim was made that "The highest performance LDPC codes" are constructed by the method. I'm still waiting for a reference supporting that extreme claim. When it comes to "highest", or best, or any other such term for an algorithm, the status can change rapidly... and only a current article can tell which is currently best. An article from several years ago [ which is very good, by the way ] showing that a method was good doesn't address the issue whether it is the method that currently produces the highest performance codes. Nahaj 15:32, 18 February 2006 (UTC)
There are two separate issues here: the origin of good parity-check matrices, and the current "best" performance.
Edratzer 17:17, 17 September 2006 (UTC)
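For what it's worth, density evolution is easy to demonstrate on the binary erasure channel, where the "density" collapses to a single erasure probability and the evolution is a scalar recursion. A minimal sketch, assuming a (3,6)-regular ensemble (the threshold figure quoted in the comments is approximate):

```python
def bec_density_evolution(eps, dv=3, dc=6, iters=2000):
    """Erasure probability of variable-to-check messages for a
    (dv, dc)-regular LDPC ensemble on a BEC with erasure prob eps."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# Below the ensemble threshold the recursion is driven to zero
# (decoding succeeds); above it, a fraction of erasures persists.
print(bec_density_evolution(0.40))  # ~0: below the (3,6) threshold (~0.429)
print(bec_density_evolution(0.45))  # stays bounded away from zero
```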
Other than the mention that LDPC is adopted for satellite broadcasting, it would be useful if someone added more text on other applications of LDPC. In particular, any thoughts on comparisons with turbo codes in "real world" cases (mobile, etc.)?--Alistair9210 08:42, 4 July 2006 (UTC)
Isn't LDPC used in 802.11n? 192.114.175.2 (talk) 11:57, 5 December 2007 (UTC)
Yes, it's specified in the HT (High Throughput) part of 802.11n, with code lengths between 648 and 1944 bits. —Preceding unsigned comment added by 131.188.138.93 (talk) 13:19, 13 August 2010 (UTC)
I think some of the details need to be better written here. The variable 'k' is not even defined. Also, the sentence "or there must be an even number of odd values" is confusing. Instead of writing that, one should write something straightforward, such as: the exclusive OR of all the bits linked by the lines will be logical 0. Also, as if by 'magic', the matrices "P" and "I" just pop out of nowhere, and nobody has any idea what they are. KorgBoy (talk) 08:10, 25 May 2017 (UTC)
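To illustrate where P and I come from and what the "even number of odd values" condition means, here is a small sketch (Python/NumPy, with an arbitrary made-up P) of the systematic construction the article seems to be using:

```python
import numpy as np

# Systematic construction over GF(2): G = [I_k | P], H = [P^T | I_{n-k}].
# (Over GF(2), -P^T equals P^T, so the minus sign drops out.)
k, n = 3, 6
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=np.uint8)           # k x (n-k), made up
G = np.hstack([np.eye(k, dtype=np.uint8), P])        # generator, k x n
H = np.hstack([P.T, np.eye(n - k, dtype=np.uint8)])  # parity checks, (n-k) x n

msg = np.array([1, 0, 1], dtype=np.uint8)
codeword = msg @ G % 2

# "An even number of odd values" just means each check XORs to 0:
print(H @ codeword % 2)  # [0 0 0] for every valid codeword
```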
I think H should be reduced to [-P^T | I_k] and not [-P^T | I_{n-k}], because the number of rows of H is actually k. It works for this example since n-k = k here, but for general n and k it will not work. — Preceding unsigned comment added by Swaprava (talk • contribs) 17:53, 5 January 2020 (UTC)
Similarly, G will start with I_{n-k} and not I_k. LDPC codes encode n-k bits into n bits, right? — Preceding unsigned comment added by Swaprava (talk • contribs) 17:56, 5 January 2020 (UTC)
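The dimensions depend on which symbol denotes the message length. Under the common convention that k message bits are encoded into n code bits, the identity block in H has size n-k; a quick numeric check with n-k ≠ k (a sketch, not the article's example) shows the orthogonality only works out that way:

```python
import numpy as np

# k = 2 message bits encoded into n = 5 code bits, so n - k = 3 != k.
k, n = 2, 5
P = np.array([[1, 0, 1],
              [1, 1, 0]], dtype=np.uint8)            # k x (n-k)
G = np.hstack([np.eye(k, dtype=np.uint8), P])        # k x n
H = np.hstack([P.T, np.eye(n - k, dtype=np.uint8)])  # (n-k) x n: 3 rows, not 2

# Every row of G is orthogonal to every row of H over GF(2), so H
# supplies n - k parity constraints under this convention.
print(G @ H.T % 2)  # 2x3 all-zero matrix
```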
Were LDPC codes "forgotten"? Gallager was a famous name in the information theory community in the 1970s and 1980s, so I doubt they were truly forgotten; they just couldn't be practically used. Calbaer 01:03, 21 July 2006 (UTC)
In the paper that effectively reintroduced LDPC codes (MacKay and Neal, "Near Shannon Limit Performance of Low Density Parity Check Codes"), there is a brief mention of this topic. The suggestion is that a slightly related family of codes (concatenated codes) was believed to be better, and hence LDPC codes were ignored. This paper cites personal communication from Gallager on this topic. Edratzer 17:25, 17 September 2006 (UTC)
LDPC codes were actually rediscovered one year before MacKay and Neal, by Wiberg, Loeliger, and Kötter in "Codes and Iterated Decoding on General Graphs", published in 1995. This was triggered by the publication of Turbo codes in 1993, which sparked renewed interest in iterative decoding algorithms. This page should be updated with a reference to that paper. Nicwi (talk) 21:21, 6 March 2016 (UTC)
I have just reverted the addition of a link added by an anonymous user to a University repository of various LDPC papers. The repository does not appear to be a general source of reference or contain VHDL code that may aid the new reader. I propose adding a link to [1] instead as this page contains open-source hardware source code. Any comments? Can the anonymous link adder come forward? Edratzer 08:07, 8 January 2007 (UTC)
Apparently, some diagrams were deleted due to no source because the diagram creator did not use a *-self template, making this article incomplete because the text depends on them. Could someone who knows about LDPC codes redraw the missing diagrams? Jesse Viviano 07:24, 26 September 2007 (UTC)
Can someone let me know what diagrams need redrawing? There is a pretty good chance that I can redraw them. User dfhayes 24/04/2008 —Preceding unsigned comment added by Dfhayes (talk • contribs) 11:03, 24 April 2008 (UTC)
I think the decoding part should be divided into two parts: one for the erasure channel and another for the decoding of soft messages. I'm quite new to Wikipedia, but I'm working on LDPC codes and I think I can help to improve this part. —Preceding unsigned comment added by Cunchem (talk • contribs) 12:30, 2 September 2008 (UTC)
I think that the title of this page should also be Gallager Codes so that it will come up easily in an internet search. I don't know how to do this, or I would do it myself. —Preceding unsigned comment added by Lansey (talk • contribs) 09:12, 4 June 2008 (UTC)
Thanks, this wiki page now comes up first in a search for Gallager Codes. --Lansey (talk) 10:07, 6 July 2008 (UTC)
Hi all. As far as I know, the check matrix cannot be brought into that form just by changing the row and column order. Because almost every column contains j 1's, there are not enough columns with a single 1. The example is not good enough. Vlsergey (talk) 20:47, 28 August 2008 (UTC)
In this case it is also called a generator matrix of the code. --Cunchem 08:46, 3 April 2009 (UTC) —Preceding unsigned comment added by Cunchem (talk • contribs)
Citation 5 on carrier recovery doesn't seem to be significant in the LDPC coding arena (Google Scholar gives 7 citations of which 2 are self-citations). I note that the Userid that added the section is very similar to the name of one of the article authors. Edratzer (talk) 21:51, 23 January 2009 (UTC)
This section adds no information on LDPC encoding or other properties. Information on generator and parity-check matrices, and deriving the code using them, belongs in an article on general FEC coding. And I am pretty sure that in practice you don't do a large, sparse(!) matrix multiplication, but rather sum up the check nodes along the edges in the graph. Please remove this unhelpful stuff. The section might instead say something about the structure of LDPC codes as bipartite graphs, regular or irregular, etc., and give links to derived codes, such as Tornado codes for the BEC.
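On the "sum up along the edges" remark: here is a minimal sketch (Python, with a hypothetical toy graph) of evaluating the parity checks from an adjacency-list view of the bipartite graph, which costs O(edges) rather than a dense matrix product:

```python
# Each check node stores only the indices of the bits it touches, so
# evaluating all checks walks the edges instead of multiplying matrices.
checks = [
    [0, 1, 3],  # check 0 constrains bits 0, 1 and 3
    [1, 2, 4],  # check 1 constrains bits 1, 2 and 4
    [0, 2, 5],  # check 2 constrains bits 0, 2 and 5
]

def check_sums(bits, checks):
    """XOR the bits along the edges of each check node."""
    return [sum(bits[v] for v in vs) % 2 for vs in checks]

word = [1, 0, 1, 1, 0, 0]
print(check_sums(word, checks))  # [0, 1, 0]: check 1 is unsatisfied
```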
Similar to the above, this section should say something about decoding algorithms, such as message passing, etc. The section states that decoding an LDPC code is NP-complete. Well, this is certainly not true for the binary erasure channel, and ironically the example given discusses just that channel.
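To back up the erasure-channel point: BEC decoding is just iterative substitution (the peeling decoder) and runs in time roughly linear in the number of edges, so it is clearly polynomial for that channel. A minimal sketch, reusing the same hypothetical toy graph as above:

```python
def peel(bits, checks):
    """Peeling decoder for the binary erasure channel (None = erased):
    repeatedly find a check with exactly one erased bit and solve it."""
    progress = True
    while progress:
        progress = False
        for vs in checks:
            erased = [v for v in vs if bits[v] is None]
            if len(erased) == 1:
                # All bits on a check XOR to 0, so the missing bit is
                # the XOR of the known ones.
                bits[erased[0]] = sum(bits[v] for v in vs
                                      if bits[v] is not None) % 2
                progress = True
    return bits

checks = [[0, 1, 3], [1, 2, 4], [0, 2, 5]]
received = [1, None, 1, 1, None, 0]  # bits 1 and 4 erased
print(peel(received, checks))        # [1, 0, 1, 1, 1, 0]: both recovered
```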
Does the subsection on lookup-table decoding need a citation? Or does anyone know of some elaboration on the method described? I'm not sure how a 1024-bit table would help decode an LDPC with a 1024-bit block size. (I may be misreading that section; it could probably be cleaned up anyway; e.g. "very high iterations" could be "many iterations" or some more fluent wording.) — Preceding unsigned comment added by 173.73.92.75 (talk) 03:30, 15 February 2014 (UTC)
The section on lookup-table absolutely needs a citation. How can any machine store a lookup table for arbitrary 1024 bit strings, much less a microcontroller? -EdwinOlson — Preceding unsigned comment added by 64.94.31.206 (talk) 23:13, 24 June 2014 (UTC)
Sentence two in the introduction: isn't that kinda obvious? Sentence three: data transmission rate? Please be precise: code rate. Turbo codes can also be classified as LDPC codes; how are they differentiated? The article states three different years for the invention of Gallager codes: 1960, 1962, and 1963. The article should further cite different variants of LDPC codes in practice.
Sorry if points are rather rough, it's late, I'm tired, and I am not an expert on coding. Anyway, thanks for giving a check. G'night ;) Nageh (talk) 22:39, 22 May 2009 (UTC)
The third sentence of the introduction ("LDPC was the first code to allow data transmission rates close to the theoretical maximum") does not really bring any valuable information. It all depends too much on what is meant by "close to the theoretical maximum". Either one should give a precise value like "within XX dB from theoretical maximum" with a corresponding reference, or one should lose the sentence altogether. Kaioshin82 (talk) 13:19, 30 July 2009 (UTC)
I have put a sentence in the introduction saying: "LDPC codes operate on the same principle as Multidimensional parity-check codes except that LDPC codes form the Parity bits from groups of randomly selected data bits." MDPC codes, being much simpler, make it easier for the beginner to grasp the central idea used in LDPC codes. At least one person has disputed the connection between MDPC and LDPC codes. I cannot imagine how anyone could fail to see the connection. I am starting this new section in order that we may arrive at a consensus on this issue. just-emery (talk) 23:43, 6 June 2009 (UTC)
If my statement had simply been that they operate on the same principle and nothing else, then you might, arguably, have had a point, but my statement as it now reads is: "LDPC codes operate on the same general principle as Multidimensional parity-check codes. The main difference between them is that LDPC codes form the parity bits from groups of randomly selected data bits and are therefore able to get much closer to the Shannon limit, due to the greatly reduced number of cycles within their graphs." It is the difference that is being pointed out as meaningful. Furthermore, MDPC codes are more intuitive and easier for beginners to understand, so pointing out the difference between MDPC and LDPC codes should help them understand it better. That is the purpose of the article, is it not? just-emery (talk) 19:45, 7 June 2009 (UTC)
I've removed the new paragraph for now, as I think we should be sure on the details first (and probably need someone like User:Edratzer who really knows what they're talking about to help!). Also, this is material that should really be somewhere other than the lead, as the lead isn't supposed to introduce material which isn't then discussed further down. Oli Filth(talk|contribs) 21:23, 7 June 2009 (UTC)
Regarding comparing the two: do you not admit that you could create an LDPC matrix that was equivalent to an MDPC code? So in effect you can think of an MDPC code as an LDPC code whose matrix satisfies certain constraints, and the fact that it uses a matrix doesn't change anything; hence the connection between them. It's the randomness of the matrix that is important. Since that is the only difference between them, it must be the reason for the increased performance of LDPC codes. just-emery (talk) 22:32, 7 June 2009 (UTC)
The above discussion is clearly getting nowhere, so I'll repeat my challenges to the material. If the answer to any one of the following questions is "no", then the material should be removed (although I suspect the answer to all three is "no"):
Oli Filth(talk|contribs) 23:38, 7 June 2009 (UTC)
I am confused by your recent edit... In the parity-check matrix there will be only 1's and 0's. The null space of the matrix defines the graph. I don't see why there should be "probabilities" in the matrix for syndrome computation. And syndrome computation normally tells you whether decoding was correct (syndrome is 0) or not (not 0). Please justify your edit (you or anyone else), either on the talk page or with a reference, or I will revert it. Thanks, cheers, Nageh (talk) 18:14, 20 October 2009 (UTC)
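On the syndrome point, this is what the standard computation looks like (a sketch with a made-up H; over GF(2) the matrix indeed holds only 0s and 1s, and a zero syndrome is what signals an accepted decode):

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(r):
    """Syndrome s = H r over GF(2); s = 0 iff r satisfies all checks."""
    return H @ r % 2

codeword = np.array([1, 0, 1, 1, 1, 0], dtype=np.uint8)
print(syndrome(codeword))   # [0 0 0]: decoding accepted

corrupted = codeword.copy()
corrupted[2] ^= 1            # flip one bit
print(syndrome(corrupted))   # [0 1 1]: nonzero, error detected
```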
Trivial, I'm sure, but...
In the introduction to the article, it refers to Gallager developing the LDPC concept in his doctoral dissertation at MIT in 1960.
However, in the History section, it states that LDPC codes were first developed by Gallager in 1963. Reference 6 states 1963. Which is true?
David Kelsey — Preceding unsigned comment added by 101.178.141.19 (talk) 00:54, 27 September 2015 (UTC)
Hello fellow Wikipedians,
I have just modified 5 external links on Low-density parity-check code. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
Cheers.—InternetArchiveBot (Report bug) 14:14, 7 January 2018 (UTC)
The decoding section jumps directly to technicalities and state-of-the-art improvements while never really introducing the belief propagation and bit-flipping algorithms, which I believe are not that hard to understand.
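For the record, the bit-flipping idea fits in a few lines; something like this sketch could anchor the section (Python/NumPy, hypothetical small code; this is the variant that flips the bit(s) sitting on the most unsatisfied checks, one of several bit-flipping flavours):

```python
import numpy as np

def bit_flip_decode(H, received, max_iters=50):
    """Hard-decision bit flipping: while any parity check fails, flip
    the bit(s) participating in the most unsatisfied checks."""
    r = received.copy()
    for _ in range(max_iters):
        s = H @ r % 2                 # syndrome: which checks fail
        if not s.any():
            return r                  # all checks satisfied
        unsat = s @ H                 # per-bit count of failing checks
        r[unsat == unsat.max()] ^= 1  # flip the worst offender(s)
    return r                          # gave up (decoding failure)

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)
sent = np.array([1, 0, 1, 1, 1, 0])
noisy = sent.copy()
noisy[0] ^= 1                         # one channel error
print(bit_flip_decode(H, noisy))      # recovers [1 0 1 1 1 0]
```

Belief propagation runs over the same graph, replacing these hard bit and check counts with probabilistic messages along the edges.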
The part about seeing an LDPC code as independent SPC codes and using SOVA or BCJR is confusing. I believe it tries to draw an analogy between turbo codes or convolutional codes and LDPC codes. It currently has no reference, and the only references I could find on applying those algorithms to LDPC were for really specific constructions with structured codes or concatenations.
The part about the lookup table has no reference either. I have no doubt that a bit-flipping algorithm can run on a low-power CPU, and that some computations can be avoided by using a lookup table (for the check node computations, for example). But I am not sure that this is what the author meant, and if it is, I don't see how it is relevant to detail the lookup table and the usage of the FIFO.