Re: More desktop security thoughts (was Re: GNOME privilege library)



On Fri, 2005-01-14 at 16:42 +0000, Mike Hearn wrote:
> On Fri, 14 Jan 2005 10:18:18 -0500, Sean Middleditch wrote:
> > This is wrong.  The problem is that you are assuming all users are
> > ignorant and thus punishing the ones that aren't.  If all users are
> > effectively running as UID 0 then any malware that gets installed (say,
> > by a child grabbing something off the net, or a bug in your browser that
> > allows code injection) also has UID 0.
> 
> Yes UID 0, if they are running as a totally new process or whatever. So
> yes they could do whatever the user could do, install new software, touch
> others files etc.
> 
> How does this happen? If bad software gets on your system and the user has
> told the computer to run it then we already lost. Bugs in browsers happen:
> we have SELinux to give damage control, and funky patches like execshield
> and ProPolice to try and stop it happening. Children installing malware
> happens, but I already talked about fighting malware in another mail. 
> 
> If it's just general distrust of children then you're presumably wanting to
> configure exactly what they can and cannot do at a fine level of control
> which makes you a sysadmin who wants MAC :) 

And MAC is really just another way of saying that you are *not* running
with superuser privileges, but only the precise privileges you need, and
that processes that need privileges are given them while others are not.
In other words, only RPM can modify /usr, and only if run as a certain
user/role.
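
To make that concrete (a conceptual sketch only - not real SELinux; the
domain names and the policy table below are made up for illustration):

    # Conceptual sketch: a policy table keyed by process domain rather
    # than by user ID.  A process gets only what its domain is granted,
    # no matter which user launched it.
    POLICY = {
        #  domain         resource   allowed operations
        ("rpm_domain",  "/usr"):   {"read", "write"},
        ("user_domain", "/usr"):   {"read"},
        ("user_domain", "$HOME"):  {"read", "write"},
    }

    def allowed(domain, resource, operation):
        return operation in POLICY.get((domain, resource), set())

    # The package tool, in its own domain, may write /usr ...
    assert allowed("rpm_domain", "/usr", "write")
    # ... but the same user's browser, in the plain user domain, may not.
    assert not allowed("user_domain", "/usr", "write")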
> 
> But yes there could definitely be a nice SELinux GUI or whatever that lets
> you say "confine user XYZ to their home directory" if you want to let
> them play in a sandbox.

Right.  Let each system be configured to its needs, with the defaults
being of the most use to the largest segment of the user base.

> 
> > If you separate the privileges and require that gaining the "install
> > software" privileges requires that the code path go through a particular
> > utility, like a single software installer, you can put checks in one
> > place and know they can't be circumvented.  For example, sure a malware
> > could craft an RPM on the fly but can it sign it with a key you trust?
> 
> Well this is heading in the right direction of removing authentication
> for home users with locked down software, but ....
> 
> I've never, ever seen a desktop operating system that managed to actually
> succeed with a single software installer. RiscOS got the closest but it
> was a much simpler system. MacOS hasn't, Windows never even tried, despite
> best efforts Linux failed miserably at this too. So any security system
> that relies on only one program being able to install software just isn't
> going to work.

That's a different problem.  Personally, I'm of the opinion that package
manager software authors need to actually do some real design for once,
make a system that works, and be done with it.  If you have two
different tools - one for system packages and one for user applications
- that's fine.  There's still a stark difference between "two utilities
can modify /usr" and "any user can modify /usr at any time with any tool
they want".

> 
> Installing stuff is just another way of reconfiguring the system anyway,
> for software that happens to need it. If the user is in control (because
> it's a private system that they own) they should be able to configure it
> in any way they see fit. Not all software needs to be installed.

Right.  My point is that the user can be in control without you needing
to gut the system.  You can take a stock Red Hat 8.0 system and put a
single user in full control without recoding anything and without having
them run as root.

> 
> > And yes, there is a distinct difference between installing stuff in /usr
> > and installing it in $HOME in the event that you have several accounts.
> > If my girlfriend somehow gets a trojan installed into her account it
> > will only affect her account and not my files - imagine if my
> > (relatively) computer-inexperienced girlfriend's actions could infect
> > the source code repositories I have access to from my account!
> 
> OK, I still think the default should be shared as I think most users do
> not have security-sensitive information or data in their $HOME :) But OK
> there should be some simple way of using ACLs from Nautilus (in such a
> setup).

I'm not talking about defaults in this case - I'm saying that if you
strip the security system out of the core OS to make things easy, you
make it impossible to secure it again.  There is absolutely no reason to
ever give user A the ability to modify user B's files unless they are
using a special tool or running in a non-standard role.  You don't have
to give up that separation just to make the system easy to use, and you
don't have to redesign UNIX to make the GUI easy to use.
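
The selective-sharing case Mike mentions is exactly what POSIX ACLs
already cover - you can grant one account access to one directory
without opening up the rest of $HOME.  Roughly (a sketch; the user and
path names are placeholders, and it assumes the standard setfacl tool
and an ACL-enabled filesystem):

    # Sketch: grant one user read access to a single directory while
    # the rest of the home directory stays private.
    import os
    import subprocess

    def share_directory(path, user):
        # existing files: read, plus traverse on directories (capital X)
        subprocess.run(["setfacl", "-R", "-m", f"u:{user}:rX", path],
                       check=True)
        # default ACL so files created later inherit the grant
        subprocess.run(["setfacl", "-R", "-d", "-m", f"u:{user}:rX", path],
                       check=True)
        # the parent directory also has to be traversable by that user
        subprocess.run(["setfacl", "-m", f"u:{user}:x",
                        os.path.dirname(path)], check=True)

    share_directory("/home/userb/shared", "usera")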

> 
> > Slightly off-topic, but my only "fear" with replacing DAC with MAC is
> > that the current MAC-for-Linux system around that's got any traction is
> > SELinux, and even the SELinux gurus seem to have trouble getting a
> > general-use working system locked down effectively with it.  It's too
> > complicated to configure and use.
> 
> Yes, this worries me too. I don't know if SELinux is badly designed or if
> MAC in general is just a very hard problem, or even if Red Hat are being
> overly ambitious. I'm not aware of any attempts to do something like this
> before.

The biggest beef I have is really that the configuration system is too
low-level.  It's like the difference between programming in C and
programming in Python.  When I configure some new task for MAC, I just
want to state "it can do this in this case and that case and can't do
anything else."  With SELinux, you have to think in terms of how you
structure that - you basically end up having to "program" the security
rather than just stating what you want.

You could fix most of this by just coming up with a far simpler
configuration language and writing a pre-processor that transforms it
into the current macro language.  Build a GUI tool on top of it and
you're mostly there.
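
Even something as crude as this sketch of that pre-processor would be a
big step up.  (The statement syntax, the path-to-type mapping, and the
permission sets here are only illustrative; a real policy also needs
file labeling and domain transitions.)

    # Sketch: turn a one-line statement like "allow rpm write /usr"
    # into SELinux-style allow rules.  The "<name>_t" types are just a
    # naming convention used for the example.
    PATH_TYPES = {"/usr": "usr_t", "/etc": "etc_t", "/home": "user_home_t"}

    def compile_rule(line):
        _, program, action, path = line.split()
        src, tgt = program + "_t", PATH_TYPES[path]
        if action == "write":
            return [
                f"allow {src} {tgt}:dir {{ search write add_name remove_name }};",
                f"allow {src} {tgt}:file {{ create read write unlink }};",
            ]
        return [f"allow {src} {tgt}:dir search;",
                f"allow {src} {tgt}:file read;"]

    for rule in compile_rule("allow rpm write /usr"):
        print(rule)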

Then it's just my worries about the system state and the declared state
in the configuration getting out of sync - if we can guarantee that
doesn't happen, then it's not an issue, I guess.


>  
> > Basically, be pro-active about security.  When your choices are a) make
> > it simple for the user or b) make sure their credit card isn't stolen,
> > you really need to go for b.  Easy is worthless if all you can do is
> > easily get screwed.  Computers exist to help people, not fool people
> > into thinking they're being helped.  Remember, we're talking about "Ease
> > of use" and not "ease of misuse."
> 
> The flip side is that somebody, somewhere will always offer an easy way
> out. I tend to agree with Havoc that manual security is no security at all
> because once it becomes too invasive or difficult users will just switch
> it off. If there's no way to switch it off, they'll use something else to
> get their work done.

Once again, it isn't black and white.  Manual security *forced* on users
will get ignored.  A user who *wants* manual security should have that
option.  And quite simply, you will never have perfect security without
manual intervention.  There is no way for the system to know, for
example, that the binary you grabbed is a trojan - only you, the user,
can possibly make that decision.  If you force every user to make such
decisions they'll usually make the wrong one and fall into bad habits,
but if you have an educated user you have to let that user decide.

Say a friend points me to a site to grab a new app.  I go grab it and it
starts to install.  It's either unsigned or has an invalid signature -
the average user will just install it anyway, yes.  But if you remove
the ability to cancel based on the invalid signature, then I, the
educated user, have no way to inspect it or stop it.

There's a big difference between automatic security and ignoring
security.
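
Concretely, all I'm arguing for is something like this in the installer
path.  (A sketch: rpm --checksig is the real command, but its exact
exit-status behaviour varies between versions, so the check here is
simplified, and actually installing would of course need the right
privileges.)

    # Sketch: verify the package before installing and let the user
    # decide when verification fails, instead of silently installing
    # or silently refusing.
    import subprocess
    import sys

    def install(package):
        result = subprocess.run(["rpm", "--checksig", package])
        if result.returncode != 0:
            answer = input(package + ": signature missing or invalid. "
                           "Install anyway? [y/N] ")
            if answer.lower() != "y":
                print("Cancelled.")
                return
        subprocess.run(["rpm", "-Uvh", package], check=True)

    if __name__ == "__main__":
        install(sys.argv[1])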

> 
> A really secure but unusable box is just a paperweight .... but this is
> the tradeoff software designers have been making since the beginning of
> time :)

This isn't a trade-off from the design perspective.  When you consider
that bugs exist, then yes, perfect security is impossible - but I'm
going to assume the design doesn't worry about bugs, because we assume
they get fixed, and when they don't, the MAC system isolates the damage.

The technology can't fix the social engineering (or ignorance) problem.
That's where the design comes in.  The security needs to be as automatic
as possible.  However, if the design assumes the social problem can never
be fixed and simply ignores it, then you have given up not only security
but also usability.  While you should be automatic wherever possible,
where user education can help you need to make sure it's possible.  Just
because one user will always make the wrong choice doesn't mean the wrong
choice should be forced on the educated user.



