This article is within the scope of WikiProject Cryptography, a collaborative effort to improve the coverage of Cryptography on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Suggestion: the tables on this page would greatly benefit from a column giving the number of bits in each hash function's output. This would permit comparison of the security-to-output-size ratio. In almost all cases, when the algorithm is sound, a larger output provides more security, which connects directly to a main point of hashing: describing data securely in a small, fixed amount of space. 192.235.78.16 (talk) 05:14, 22 November 2016 (UTC)
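For context, the generic bounds behind this suggestion are the birthday bound of 2^(n/2) for collisions and 2^n for (second) preimages, where n is the output size in bits. A minimal illustrative sketch in Python; the hash names and sizes below are just common examples, not drawn from the tables under discussion:

    # Generic security levels implied by output size n alone,
    # assuming no structural weaknesses in the algorithm.
    output_bits = {
        "MD5": 128,
        "SHA-1": 160,
        "SHA-256": 256,
        "SHA-512": 512,
    }

    for name, n in output_bits.items():
        # Collision resistance is capped at 2^(n/2) by the birthday bound;
        # preimage resistance is capped at 2^n by brute force.
        print(f"{name}: output = {n} bits, "
              f"generic collision bound = 2^{n // 2}, "
              f"generic preimage bound = 2^{n}")

A column of output sizes would let readers compare each attack's complexity against these generic bounds at a glance.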
I'm curious whether there is some way to either split a few pages or otherwise keep pages with the same content up to date. For instance, there are at least:
Cryptographic_hash_function
Comparison_of_cryptographic_hash_functions
Hash_function_security_summary
They aren't always in sync with each other, and that's not to mention the pages for each individual hash function. Thoughts?
Quelrod (talk) 18:24, 31 July 2010 (UTC)
Merging this page with Comparison of cryptographic hash functions would be a first small step; I don't see a reason for having two comparison pages. Updating the comparison pages more conservatively might help too. For example, has it been verified that the 2^51 collision attack against SHA-1 really has this complexity? The author clearly states in the paper that the result is based on heuristics (for comparison, the generic birthday bound for SHA-1's 160-bit output is 2^80, so such a result would be a substantial break and deserves scrutiny). On the SHA-1 page it is always possible to include papers with uncertain results alongside a short comment; on a page of summaries, however, this leads to oversimplification. 85.1.63.54 (talk) 06:10, 1 August 2010 (UTC)
@Quelrod: I have merged all content from the Comparison of cryptographic hash functions table to here. I think this format allows a more detailed approach than the other comparisons. It prioritizes the number of broken rounds over time complexity, since an attack that doesn't break all rounds isn't actually a "successful" attack on the full function. It also allows memory complexity to be taken into account.
I used personal judgement in some cases. For example, the Tiger collision attack [1] listed there actually concerns only pseudo-collisions, which I don't think is the same as a proper collision (see the sketch below). The novel MD4 attacks reported in [2] are interesting, but once their memory and precomputation requirements are taken into account, the original 2008 attack appears cheaper. I would be happy to discuss these if someone has an opinion.
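For readers unfamiliar with the distinction, here is a minimal Python sketch; compress() is a hypothetical stand-in for a real compression function such as Tiger's, and both predicates are purely illustrative:

    def is_proper_collision(compress, iv_std, m1, m2):
        # Proper collision: the standard IV is fixed, and two distinct
        # messages produce the same compression output.
        return m1 != m2 and compress(iv_std, m1) == compress(iv_std, m2)

    def is_pseudo_collision(compress, iv1, m1, iv2, m2):
        # Pseudo-collision: the attacker may also vary the chaining input,
        # so iv1 and iv2 need not equal the standard IV or each other.
        return (iv1, m1) != (iv2, m2) and compress(iv1, m1) == compress(iv2, m2)

Because a pseudo-collision lets the attacker choose the chaining values, it does not directly yield two colliding messages for the full hash, which is why I treat it as weaker evidence of a break.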
As for the IP user's comment about verification, such confirmations rarely materialize. I think the best we can do is make sure the papers come from reputable journals or conferences, whose peer review processes are in place to catch errors.