Can someone help me understand Linux containers?

https://www.reddit.com/r/homelab/comments/ce50yg/can_someone_help_me_understand_linux_containers/

Can someone help me understand Linux containers? I've been in IT-related fields my whole 12-year career, mostly as a network engineer. I've used Linux and Windows for decades, but I'm having a hard time wrapping my head around what Linux containers are, why they're useful, and how they work. Does anyone have any simple-to-understand resources?

-ah • 7y ago
It's OS-level virtualisation: essentially a way of packaging an application or system with all the components it requires, while still running it natively on the host, albeit with a layer of isolation from the rest of the system (provided, IIRC, by the host kernel). As it runs natively on the host rather than under full virtualisation, it's a bit quicker than a standard VM, and it doesn't need to include everything (so it's not another OS image..).

As to usefulness, well, they allow for rapid application deployment, and as they come with all of the dependencies you need, you don't have to worry about maintaining everything independently or dealing with conflicts.

Realistically the benefit is in the whole ecosystem though, so you have your containers, and then management layers above that help to deploy and manage them.

Red Hat has a decent overview of the functionality and what they are..

From a personal perspective, they do have a part to play, and like VMs they offer a way of simplifying deployments and creating somewhat hardware-agnostic environments. However, I'd argue that, like VMs, they get overused and often lead to issues with people not being aware of what they are actually running and its dependencies. Throw in the complexity they add and you can end up with what is essentially a security compromise (the isolation is a good thing, but the added complexity can outweigh it if you aren't sensible). You also need to trust the source of any containers you run.

CalJebron OP • 7y ago
Thanks very much for the write-up. Is there such a thing as a Windows container? How do you know which components are in a container and which are part of the underlying OS? Say I deploy a Linux container for a LAMP server (is this a thing?), do I still have to manage it exactly the same way as a standard LAMP installation, or is it simplified somehow?

-ah • 7y ago
Is there such a thing as a Windows container?

Yeah, there are Windows containers (although you'd need to run a Windows host..). I think you can run a Linux container on a Windows host, but that essentially sits on top of a Linux VM.

How do you know which components are in a container and which are part of the underlying OS? Say I deploy a Linux container for a LAMP server (is this a thing?), do I still have to manage it exactly the same way as a standard LAMP installation, or is it simplified somehow?

One of the more common containers you'll see is essentially a web application in a box: it comes with the web application, the web server and any other elements (database etc..) all included, all ready to go. You don't then need to install any of the individual components, but.. (and this is the bit that usually makes me balk..) you are generally at the mercy of whoever put the container together in terms of how that LEMP/LAMP stack is assembled.

As to a straight-up LEMP 'container', that seems to be a common use case too, and they are apparently easy enough to build (see here - https://linoxide.com/containers/setup-lemp-stack-docker/).

I'll add the caveat that I haven't used that approach; all my web servers are either dedicated on hardware or in VMs (if I need to move stuff between hardware..), so I have no experience maintaining them..

As to what is in each container, you can check, and obviously you know what you need when you build it. It essentially comes down to everything on top of your running kernel, with some ability to share resources (as you would under any other circumstance..). The point of a container, however, is that it includes all the bits it needs, and you can manage that independently of the host.

Edit: For clarity, the container would contain the binaries and libraries it needs, so its own version of NGINX, MySQL and so on.


u/wingerd33 • 7y ago
To me it's easiest to start with the why. Unfortunately, in this case there are several, but I'll cover two well-understood scenarios. I'll speak in the context of Linux kernel namespaces, as containers are really just a wrapper and tooling around namespaces.

– 1 –

Let’s say two processes are running on the same machine. One is a web server, and the other is a file server that you use to store personal photos.

Someone exploits a vulnerability in your website and gains system-level access to your server. You don't want them to see your personal photos.

Sure, you could solve this with good file permissions, but at that point they'll probably fire off a privilege-escalation attack and have root.

Kernel namespaces in Linux provide isolation for the various resources available to a process: IPC, network, filesystem, etc. So two processes running in separate namespaces don't know about each other's resources. This even holds for a process running as root: if the namespaces are configured correctly, the root user inside the container can be mapped to a bogus user on the host which has zero privileges.
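You can poke at this on any Linux box without a container runtime; a quick sketch (the `unshare` line in the comments is illustrative and assumes unprivileged user namespaces are enabled on the host):

```shell
# Every process's namespace memberships show up as symlinks under
# /proc/<pid>/ns/; two processes in the same namespace share an inode here.
readlink /proc/self/ns/net /proc/self/ns/pid /proc/self/ns/mnt

# With util-linux's unshare you could enter fresh namespaces and see uid 0
# mapped to your unprivileged host uid, e.g.:
#   unshare --user --map-root-user sh -c 'id -u'
# "root" there has no privileges on the host, which is the mapping
# described above.
```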

– 2 –

You want to limit the resources a process can consume: CPU time, RAM, disk IOPS, network bandwidth, etc. Obviously we've been doing this with VMs for a while. But VMs are not the best we can do. If you run 10 VMs on a host, that host consumes the amount of RAM necessary to run 11 operating systems. Every one of those needs all the package updates, consumes CPU time, and must be maintained as a full OS.

Namespaces are cheap. Linux cgroups (control groups) let you put resource limits on a process or service without all the extra overhead of a VM. This means a machine that could run 10 VMs may be able to run 50 containers.
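To make that concrete, here's a small sketch; the first line works as any user, while the commented lines assume a cgroup v2 host (mounted at /sys/fs/cgroup), root access, and an example group name of my choosing:

```shell
# Every Linux process already belongs to a cgroup; show this shell's.
cat /proc/self/cgroup

# On a cgroup v2 host, a root user could cap a group at half a CPU core
# and 256 MiB of RAM roughly like this ("demo" is just an example name):
#   mkdir /sys/fs/cgroup/demo
#   echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max    # 50ms per 100ms
#   echo $((256*1024*1024)) > /sys/fs/cgroup/demo/memory.max
#   echo $$ > /sys/fs/cgroup/demo/cgroup.procs           # move shell in
```

Container runtimes do essentially this bookkeeping for you when you set CPU or memory limits on a container.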

There are several other use cases for containers, centered around efficiency and consistency. But these examples should help you understand what they are. I’ve typed enough so I’ll let someone else explain developer productivity and environment consistency.

u/keypress-alt-f4 • 7y ago
Just to clarify, do you mean containers that share the underlying host OS, or virtual machines that are just software emulations of physical computers?

CalJebron OP • 7y ago
I work with virtual machines a lot, both VMware and Hyper-V, so I understand them. Containers are what I'm more confused by.

u/keypress-alt-f4 • 7y ago
Oh God, don't I know it. Confusing as F. I have yet to find a really simple explanation. I understand them pretty well now - well enough to leverage in architectures - but I'm not adept enough to explain them well to others.


u/matthewZHAO • 7y ago
A Linux container is basically similar to a VM, but instead of running another kernel, it shares the kernel with the host OS, so it's much more efficient. But it's still its own discrete environment.

_kroy • 7y ago
For me, I think about the two main types of containers differently:

LXC. I treat these like mini VMs. The difference is they are tons lighter on resources.

Docker. I treat these more like apps. For example, if I wanted to spin up an nginx server, I would pass through 80/443 and configure it so my logs/config/docroot all lived under something like /nginx on the host. So instead of digging through /etc/nginx, /var/log, /var/www, etc., everything would be centralised. If I wanted to try a bleeding-edge nginx, I would just change the container I was pulling.
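A minimal sketch of that setup, assuming Docker is installed; the /nginx/* host paths and the container name "web" are illustrative, not anything standard:

```shell
# Publish 80/443 and keep config, logs and docroot under one host
# directory (/nginx here) by bind-mounting them into the container.
docker run -d --name web \
  -p 80:80 -p 443:443 \
  -v /nginx/conf:/etc/nginx/conf.d \
  -v /nginx/log:/var/log/nginx \
  -v /nginx/html:/usr/share/nginx/html \
  nginx:latest

# Trying a bleeding-edge nginx is then just a matter of pulling a
# different image tag (e.g. nginx:mainline) and recreating the container.
```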

