No, it is not a horse. It is "Secure Transmission of Encrypted Electronic Data."
After an update on Tuesday, the GnuPG GUI KGpg failed to run. GnuPG itself still worked from the Konsole.
I purged KGpg and installed Kleopatra.
It installed "The STEED Self-signing Nonthority" X.509 certificate.
I had never heard of it before and didn't know what it was or what it was for.
Here is what I found: https://lwn.net/Articles/464137/
Now I know why Thunderbird would use a GPG key or accept an X.509 certificate.
If everyone has a key, and other users' keys are easily retrievable via a DNS query performed transparently by the MUA, then email encryption and digital signatures would work smoothly. The remaining problem in the scheme is how to authenticate the key or email address of a remote person — particularly one that has not made contact in the past. After all, an attacker could intercept DNS queries and spoof an identity or perform a man-in-the-middle attack against a legitimate-looking contact.
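The transparent DNS lookup described above can be sketched. STEED's original proposal predates it, but the same idea was later standardized as the OPENPGPKEY DNS record (RFC 7929), where the MUA derives the query name from the email address. This is an illustrative sketch of that name derivation, not STEED's exact scheme:

```python
import hashlib

def openpgpkey_qname(email):
    """Build the DNS name an MUA would query for an OPENPGPKEY record
    (RFC 7929 style): the local part is hashed with SHA-256, truncated
    to 28 octets (56 hex characters), and prefixed to the domain."""
    local, domain = email.split("@", 1)
    digest = hashlib.sha256(local.encode("utf-8")).hexdigest()[:56]
    return f"{digest}._openpgpkey.{domain}"
```

A resolver query against that name would then return the recipient's public key, which is exactly the kind of lookup an attacker could try to intercept or spoof, motivating the trust question discussed next.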
The existing email encryption schemes tackle this problem with PKIX and WoT. But the authors cite a PGP usability study that indicates that these trust models are confusing to users:
Both systems require a significant investment by the user: X.509 asks the user to sink money into the artificial certificate market that provides a dubious return, while OpenPGP asks the user harder and harder questions about the trustworthiness of peers away from the center of his personal web of trust.
Furthermore, they add, neither trust model matches up with users' natural expectations when using email, because both defer trust decisions to someone else. PKIX defers all trust judgments to an external authority, while WoT defers it to peer recommendations. In both cases, a binary trust determination is made before the communication is even read: "Neither system utilizes the users own experience with the peer in the context of the communication happening over time."
STEED's trust model is "trust upon first contact" (TUFC), which accepts the certificate or key of the remote party upon first contact, but persists and tracks it for the user. This is the trust model used by SSH, the authors note, and is what "virtually all users do anyway, when faced with the task to make a trust decision that interrupts their line of work." In other words, TUFC exists outside of an external "trust infrastructure," and leaves it up to the user to verify suspicious first contacts through other means (in person, phone calls, etc.). After the first contact, the system helps the user by flagging changed or revoked keys.
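The TUFC behavior is easy to illustrate. This is a minimal sketch of the idea, not any real STEED or GnuPG API; the function and store names are made up for illustration:

```python
# Sketch of "trust upon first contact" (TUFC), SSH-style:
# accept a key the first time an address is seen, persist it,
# and flag any later change for the user to verify out of band.

known_keys = {}  # address -> fingerprint recorded on first contact

def check_key(address, fingerprint):
    if address not in known_keys:
        known_keys[address] = fingerprint  # first contact: accept and persist
        return "accepted (first contact)"
    if known_keys[address] == fingerprint:
        return "trusted (matches stored key)"
    return "WARNING: key changed since first contact"
```

No external authority or web of trust is consulted; the only trust input is the user's own history with that correspondent, which is the point the authors make about matching users' natural expectations.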