Keeex uses an innovative way to chain cryptographic hash functions to improve their resilience against potential future vulnerabilities.
This algorithm combines at least two digests in a way that limits the control an attacker could have over hash computations in the future.
Assuming:
The output value is computed as follows:
This allows for efficient parallel computation of most of the digest, while also making the output of each stage dependent on both the digest algorithms' configuration and the previous stages.
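The stated properties (parallel bulk hashing, each stage depending on the configuration and on the previous stages) can be sketched as below. This is a hypothetical illustration under assumptions, not the exact KeeeX construction: the function name, the way stages are combined, and the ordering of names in the identifier are all assumptions.

```python
import hashlib

def chained_digest(data: bytes, names=("sha256", "sha3_256")) -> bytes:
    # Hypothetical illustration only, not the exact KeeeX construction.
    # Bulk pass: every algorithm hashes the full data independently,
    # so this part could be computed in parallel.
    bulk = [hashlib.new(name, data).digest() for name in names]
    # Identifier built from the algorithm names separated by '<'
    # (the ordering used here is an assumption).
    identifier = "<".join(reversed(names)).encode("utf-8")
    # Chaining pass: fold each remaining digest into the running state
    # together with the identifier, so every stage output depends on
    # the algorithm configuration and on all previous stages.
    state = bulk[0]
    for name, digest in zip(names[1:], bulk[1:]):
        state = hashlib.new(name, identifier + state + digest).digest()
    return state
```

With the default algorithm pair, only the short chaining pass is serial; the expensive hashing of the data itself happens in the independent bulk pass.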
The hash identifier is a UTF-8 string (in case a future algorithm's name extends beyond the basic ASCII range), built from the names of each algorithm used, separated by the < character.
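As a minimal sketch, the identifier can be assembled by joining the names; the helper name and the order in which the names are passed are assumptions, not part of the specification.

```python
def build_identifier(names):
    # Hypothetical helper: join the algorithm names with the '<'
    # separator to form the UTF-8 hash identifier.
    return "<".join(names)
```

For example, `build_identifier(["sha3_256", "sha256"])` yields the default identifier "sha3_256<sha256".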
It is currently built with the names of the following supported digests:
sha1, sha224, sha256, sha512, ripemd160, sha3_224, sha3_256, sha3_384, sha3_512, keccak224, keccak256, keccak384, keccak512.

A potential weakness of such an algorithm is that an attacker could replace the selection of hashes, potentially using just one algorithm, which would greatly defeat the purpose.
To that end, this specification imposes two requirements: a multihash must always use at least two algorithms, and it must be used in a context that imposes a minimum restriction on which algorithms may be chosen.
In the case of KeeeX, the current metadata format used in keeexed files requires at least two algorithms, ending with sha3_256 and using at least one other algorithm.
The current specification defaults to sha3_256<sha256.
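Assuming the identifier lists the final-stage algorithm first, which is consistent with the default "sha3_256<sha256" above, the metadata requirement can be checked with a short sketch (the function name is hypothetical):

```python
def validate_identifier(identifier: str) -> bool:
    # Sketch of the requirement described above, assuming the
    # identifier lists the final-stage algorithm first, as in
    # "sha3_256<sha256": at least two algorithms must be present,
    # with sha3_256 as the final stage.
    names = identifier.split("<")
    return len(names) >= 2 and names[0] == "sha3_256"
```

Under these assumptions, "sha3_256<sha256" is accepted, while a single-algorithm identifier such as "sha256" is rejected.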
One of the major risks with using a digest function to represent data is that an attacker could substitute a set of legitimate data with another, while retaining the same digest value.
While this is theoretically possible, it is considered improbable for the suite of digests currently in use.
This, however, may not hold forever.
Chaining the digests in a way that limits the control an attacker has over the actual input is an attempt to push the risk of a valid second preimage further out of reach.
By limiting control over the digest input in a way that also leverages the output of another digest, and by imposing a minimum number of stages and specific algorithm families, we estimate that the risk of a chain of flaws or exploits leading to a successful second-preimage attack is significantly lowered.