Kerckhoff's Principle

How do you test a cryptosystem, and in what sense is cryptology much better than rocket science?

Few pieces of technology are so ubiquitous and, at the same time, so arcane and wrapped in myths and misconceptions as cryptosystems. The unique aspects of cryptology, and the ways in which it affects users and society at large, yield a very unusual and indeed fascinating life cycle of development, adoption, dissemination and obsolescence for its objects, chief among them the cryptosystems. In this article, we are going to explore the ramifications of Kerckhoff's Principle, which states, paradoxically, that these awe-inspiring techniques for concealing and controlling information are rather friendly to openness and the free sharing of knowledge.

Asymmetric, or public-key, encryption schemes are highly elaborate constructions, both mathematically and conceptually. A cryptosystem consists of a family of pairs of cryptographic algorithms that are instantiated by parameters called keys. Each instance is determined by a private key, from which it is easy to derive a public key, from which, in turn, it is infeasible to derive the private key back.
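To make that asymmetry concrete, here is a minimal sketch in Python, assuming the third-party "cryptography" package is available (the article does not prescribe any particular library or scheme; RSA is used purely as an illustration). Deriving the public key from the private key is a one-line operation, while recovering the private key from the public one would require factoring the public modulus, which is believed to be infeasible at this key size.

```python
# Illustrative sketch only: requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the private key; it contains the secret prime factors of the modulus.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Deriving the public key from the private key is easy...
public_key = private_key.public_key()

# ...but going the other way would require factoring the 2048-bit public modulus n,
# which is believed to be computationally infeasible.
n = public_key.public_numbers().n
print(f"public modulus has {n.bit_length()} bits")
```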

As their names suggest, the algorithms instantiated by private keys perform tasks intended for private use, and their keys are therefore supposed to be kept private, while the public keys instantiate their respective counterparts, which are meant to be publicly available and can therefore be published. In the case of encryption schemes, the public key is used to encrypt messages that can only be decrypted with its private counterpart. In the case of signature schemes, the private key is used to issue digital signatures. A digital signature, say s = sign(k_pri, doc), issued by a given private key k_pri, is a mathematical object that, with the public key corresponding to k_pri, say k_pub, can be verified to have been feasibly produced only by applying the signing algorithm, sign, to doc, the information to be signed, and k_pri. This verification can be represented by the expression verify(k_pub, doc, s) = true. If k_pub is replaced by another key, or if s is replaced by something not belonging to the image of sign(k_pri, · ), the verify algorithm yields false.
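The sign/verify relationship above can be sketched in a few lines of Python, again assuming the "cryptography" package and using Ed25519 merely as a stand-in for the abstract scheme:

```python
# Illustrative sketch only: requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

k_pri = ed25519.Ed25519PrivateKey.generate()  # kept private
k_pub = k_pri.public_key()                    # may be published

doc = b"the information to be signed"
s = k_pri.sign(doc)                           # s = sign(k_pri, doc)

try:
    k_pub.verify(s, doc)                      # verify(k_pub, doc, s) = true
    print("signature is valid")
except InvalidSignature:
    print("signature is invalid")

# Substituting another document (or another public key) makes verification fail.
try:
    k_pub.verify(s, b"a tampered document")
except InvalidSignature:
    print("tampered document rejected")
```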

With the explosion of the Internet in the mid 1990s, during the dot-com bubble, public-key cryptosystems eventually became a ubiquitous building block of daily life. From simple web browsing, to all forms of online or digital payment, to digital documents and cryptocurrencies, all are made possible by the nearly ubiquitous, and nonetheless grossly underused, powers of public-key cryptosystems. The lay public has, indeed, a lot to gain from understanding the fundamentals of this technology: consider, for instance, that were this understanding widely disseminated, password-based online authentication protocols would be rendered completely obsolete, in favor of far more secure and practical ones based on digital signatures.
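As a hedged sketch of what such a passwordless login could look like (a hypothetical challenge-response flow, not a description of any deployed protocol): the server sends a fresh random challenge, the user signs it with a private key that never leaves their device, and the server checks the signature against the public key registered at enrollment.

```python
# Hypothetical challenge-response login sketch; requires the "cryptography" package.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Enrollment (done once): only the public key is handed to the server.
user_key = ed25519.Ed25519PrivateKey.generate()   # stays on the user's device
registered_public_key = user_key.public_key()     # stored by the server

# Login: the server issues a fresh random challenge instead of asking for a password.
challenge = os.urandom(32)

# The user proves possession of the private key by signing the challenge.
response = user_key.sign(challenge)

# The server verifies; no reusable secret ever crosses the network.
try:
    registered_public_key.verify(response, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Unlike a password database, the server's list of public keys is not a secret worth stealing, which is part of why such schemes would be both more secure and more practical.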

Life cycle of a public-key cryptosystem

Experience shows that it is exceedingly difficult to prove the nonexistence of efficient deciphering methods, a.k.a. cryptanalyses, for any given public-key cryptosystem. A typical public-key cryptosystem's life cycle, therefore, is shaped by constant educated guessing about the likelihood of a cryptanalysis, first, actually existing and, second, given that it does exist, being discovered soon enough to curtail the actual employment of the cryptosystem. As we shall see, this helps establish an alignment of incentives that favors disclosure and preservation of technological and scientific knowledge.

As knowledge applicable to the cryptanalysis of each given cryptosystem accumulates over time, deciphering methods of ever lower complexity arise, and the perceived likelihood of a polynomial-time cryptanalysis existing and being found shortly rises. Another way to describe the life cycle of a public-key cryptosystem is that it can only be deemed secure if and when there is enough knowledge to conclude that an effective cryptanalysis would be very difficult to find, but not enough to suspect that such a cryptanalysis is likely to be found soon. Finally, for a cryptosystem to be deemed viable, it must also be competitive with other cryptosystems, both in the complexity of the best known deciphering methods against it and in its consumption of computational resources.

Kerckhoff’s Principle

If the developers of a cryptosystem opt to publish it, an interesting alignment of interests takes place. Each individual cryptologist is both (A) prompted to advance and publish knowledge about any given cryptanalysis and (B) unable to prevent any other cryptologist from doing the same, aiming, for instance, at extending the window of secure usage of the given cryptosystem. This produces a Nash equilibrium in which individuals (attempt to) do exactly that. In the opposite case, the developer of a cryptosystem can instead resort to security by obscurity: 'An adversary will have a much harder time cracking your codes if they don't even know your methods.' In this latter scenario, however, any individual user of the obscure method who sells it out to an adversary or third party keeps the benefits of the sell-out to themselves, so the users are in a Nash equilibrium in which all are prompted to do just that. Any downsides of the community losing exclusivity of the method would typically be diluted by the size of that community, or even negated by the role of the betraying member, who could, for instance, be a planted adversary or be motivated by revenge.


Notice that, if the doctrine of security by obscurity is chosen and the designed system fails, the attacking side can try to carefully exploit and prolong the defending side's mistaken perception of security. The more heavily the defenders' security depends on obscurity, the worse this is for them. That paradigm was masterfully exemplified in the movie "The Imitation Game". In it, British intelligence is portrayed as choosing not to exploit all of the intelligence yielded by the cracking of the Enigma machine by the team led by Alan Turing. Their goal was to conceal from their enemies the fact that they had such a capability, so that the advantage could be milked for as long as possible. Had the opponent relied only on public, time-proven methods (if such methods indeed existed at the time), possible vulnerabilities in their systems would have been much more likely to eventually leak into public knowledge, and therefore to reach them as well.

The same logic described for deliberate actions by the users also holds for accidental leakage of the obscure method. Once more, the larger the community, the more the seemingly unavoidable prospect of leakage becomes a tragedy of the commons: 'If the eventual loss of secrecy is so likely, why should I try so hard to do my part to keep the secret? If and when it leaks, it is doubtful that the person responsible will even be identified, let alone held accountable.' Conversely, the smaller the intended community, the less likely it is that paying the price of developing an entire system from scratch for exclusive use will pay off. Finally, it does not help to try to purchase such development from an outside party with no skin in your game: how could you guarantee they have not already sold the same method to somebody else? You cannot.

In the end, the role of obscurity within information security is typically restricted either to individual use by the very developers themselves (who had better know very well what they are doing!), or to small communities of experts with skin in the game; in either case, typically as a dispensable, extra layer of security whose compromise would not be catastrophic. (Remember that next time you consider an extravagant, out-of-the-box password arrangement for yourself; but we shall leave that for another article.) With time, the doctrine of rejecting security by obscurity became widely accepted and is now known as Kerckhoff's Principle, after the 19th-century Dutch cryptologist Auguste Kerckhoffs.

How Cryptology Compares with Obscure Technologies

A fascinating side effect of Kerckhoff's Principle is the alignment of interests towards the development, publication, and preservation of acquired knowledge. Notice that the cryptology community will always be prompted to close the windows of usability of cryptosystems still in use. Consequently, it will never cease putting effort into inventing new cryptosystems to replace existing ones when they become obsolete.

Let us now compare that with technologies of other natures, say military, naval or aerospace: contrary to what happens with a cryptosystem, in order to test a ship, a rocket or a nuclear bomb, one does not need to publish its blueprint and wait to see whether adversaries detect a flaw. On the contrary, part of the motivation for pursuing those technologies is precisely the possibility of having exclusive access to them. This inevitably leads them to be obscure.

In fact, the losses of the technologies behind the Saturn V's engines, 16th-century Portuguese caravels, and South Africa's nuclear capability would, in my opinion, be perfect examples of that. What is particularly serendipitous is that, in a way, it was precisely the employment of cryptographic techniques that helped bring about the obscurity, and the consequent loss, of those other technologies.
