[Freedombox-discuss] Identity management

Daniel Kahn Gillmor dkg at fifthhorseman.net
Wed Feb 22 19:15:52 UTC 2012


Hi Mike--

(i'm one of the monkeysphere devs, and have an interest in seeing this
freedombox thing succeed too)

On 02/22/2012 12:30 PM, Mike Rosing wrote:
> I looked at Monkeysphere and PGP (and GPG) and I have a philosophical
> question about "Box identity" and "User identity".  The details of GPG
> and PGP are the use of large primes which are not humanly possible to
> remember.  This forces the use of some disk storage for secret keys.

Yes, this is true.

> One of the main arguments for using elliptic curve crypto is that any
> key can be used.  Usually it is a hash of a pass phrase (and one can go
> nuts dealing with pass phrase security too, but let's not go there for
> now).

This is a bad idea if you actually care about the strength of your key.
FWIW, it's also possible to use a user's password as a seed to a PRNG
to generate an RSA or DSA key.  That doesn't make it a good idea: the
derived key can never be stronger than the password it came from.
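To make concrete why password-derived keys inherit the password's
weakness, here's a minimal sketch (my own illustration, not Monkeysphere
or GnuPG code) of stretching a passphrase into deterministic key
material:

```python
import hashlib

# Hypothetical sketch (my own, not any real implementation): stretch a
# passphrase into an arbitrary number of deterministic key-material bits.
def derive_key_material(passphrase: str, nbits: int = 256) -> bytes:
    seed = passphrase.encode("utf-8")
    out = b""
    counter = 0
    while len(out) * 8 < nbits:
        # hash(seed || counter) in counter mode until we have enough bits
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[: nbits // 8]

# Deterministic: the same passphrase always yields the same "secret" key,
# so an attacker can simply enumerate candidate passphrases offline.
assert derive_key_material("correct horse") == derive_key_material("correct horse")
assert len(derive_key_material("correct horse")) == 32  # 256 output bits,
                                                        # NOT 256 bits of entropy
```

However many output bits you ask for, the attacker never has to search
the full keyspace -- only the space of passphrases a human would pick.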

> The fundamental philosophy is that the User identity is never
> stored except in the user's head.  This is very different than the way
> GPG and PGP are set up.

OpenPGP as a cryptosystem (and GnuPG as an implementation of it) is
flexible enough to let a user's identity be stored in their head.  The
trouble is: for precise storage of high-entropy data, most human heads
just aren't particularly capable, and a brute-force machine can pretty
rapidly exhaust most human minds.
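Some back-of-the-envelope arithmetic (assumed passphrase shapes, purely
illustrative) shows how small a memorized secret's keyspace is next to a
machine-held key:

```python
import math

# Assumed passphrase shapes -- illustrative numbers only.
lowercase_8 = 26 ** 8      # 8 random lowercase letters
diceware_4 = 7776 ** 4     # 4 words from the 7776-word Diceware list

print(f"8 lowercase letters: ~2^{math.log2(lowercase_8):.0f}")
print(f"4 Diceware words:    ~2^{math.log2(diceware_4):.0f}")
print("128-bit random key:   2^128")
```

At a (conservative) billion guesses per second, a ~2^38 space falls in
minutes and ~2^52 on the order of months; 2^128 does not fall.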

> My personal feeling is that it is far safer to not have any tie between
> the person and digital media.  A person's secret key can be derived
> every time they need it, on any device using a simple hash function. 
> This allows multiple identities very easily (so long as the person
> remembers the pass phrase for each identity).  This makes the secret key
> ephemeral as far as hardware goes, which makes the system safer from
> post mortem attacks.

This also encourages the use of arbitrary local machinery, into which
you type your "only-in-your-head" secret.  Now there's a copy of this
secret in the machinery you just used.  Was it an internet cafe?  Was it
a friend's machine?  Was it your boss's machine?  How do you know that
machine is not recording what you entered?  If your local endpoint is
compromised, you've just lost control of your identity.

The natural response to concerns about compromised local endpoints is to
have a trusted physical console [0].  Once you have a TPC, though, then
the idea of relying on your mind as a source of high-entropy data is
rather redundant.  You're maintaining and monitoring your TPC; why not
use it as an effective cognitive prosthetic on the network?

> The other problem I've had with PGP and GPG in the past is that it
> requires the user to understand what the security system is doing.  I'd
> rather see an "invisible" security system.  It might be more complicated
> internally, but from the users perspective the security system should
> just work, or it should just fail.

Security is a process, not a magic sauce.  It only works when the
people involved understand at least the outline of what it's doing.

A classic example of this is the web browser model.  Many people don't
understand when they should even be looking for the "lock" (or whatever
UI variant the browsers have decided a valid https session should
present as today).  They also don't understand that the lock itself
doesn't mean "the site is trustworthy", it just means "the site is
actually www.example.com".  These are useful clues that can permit
people to have a cryptographically-safer browsing experience (CA cartel
issues aside), but *only* if they understand what the UI clues are and
why and when they should be relevant.

Browser developers know that if they make their browser "just fail" when
there's a problem, the user will pick another browser that lets them
work around the failure ("oh, i can't visit that site with chrome, but
it's fine with IE").  Is the user more secure as a result?

The same is true for real-world security too, fwiw.  A security policy
that outlines a set of steps to be followed only increases security if
the people using it understand why they are doing what they're doing.

Humans are a critical part of any security system.  We need to make
systems that expose the security features to the users in ways that they
understand, can relate to, and are engaged by.

I'm not talking about the math or the algorithms, of course; i'm talking
about the expected properties of the information in transit.  People
need to know things like:

 0) Am i anonymous in this communication?  Or have i claimed an identity
(and which one)?  If i've claimed an identity, have i proved that i am
that identity, or is it just an asserted-but-unproven claim?

 1) Do i know who sent me the data i'm looking at (or listening to)?
Who is the sender?

 2) Do i know who i'm about to send data to?  Can anyone other than the
recipients i know about view the data i'm sending?

If the user doesn't know or think about those questions, there is no
way a cryptosystem can fix things for them.
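For concreteness, the properties those questions ask about could be
modeled as a small per-message record that a client computes and
surfaces to the user.  The names here are my own invention, not from
any real UI:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative only -- field names are hypothetical, not from a real client.
@dataclass
class MessageSecurityState:
    claimed_sender: Optional[str]      # None means anonymous (question 0)
    sender_verified: bool              # proved vs. merely asserted (0, 1)
    known_recipients: Tuple[str, ...]  # who we think we're sending to (2)
    end_to_end_encrypted: bool         # can anyone else read it? (2)

# An anonymous, unencrypted message would look like:
state = MessageSecurityState(
    claimed_sender=None,
    sender_verified=False,
    known_recipients=(),
    end_to_end_encrypted=False,
)
```

The point is not this particular record, but that a UI has to compute
and show *something* like it before users can reason about the answers.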

Regards,

	--dkg

[0] http://cmrg.fifthhorseman.net/wiki/TrustedPhysicalConsole


