Hacker News
How is Docker.io different from a normal virtual machine? (stackoverflow.com)
308 points by jaynate on Sept 13, 2013 | hide | past | favorite | 106 comments


Docker doesn't add a whole lot over what basic Linux containers (lxc and vserver) have offered for years. Having said that, the main benefit of Docker is a change in viewpoint from "virtual machine" to "application". Docker aims to make applications portably deployable to any machine running Docker. Since Docker uses lxc (aka Linux containers), it helps to understand a little how containers differ from other virtualization.

Conceptually, they are similar to Linux's chroots or FreeBSD's jails, which offer process isolation, but they go further: a container behaves like a lightweight virtual machine rather than isolating just a single process. Containers have lower overhead because they virtualize at the operating-system level. Other virtualization technologies like Xen and KVM work at the hardware level, presenting a fully virtualized hardware setup to the virtual machines.
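The "operating-system level" point is easy to see on a Linux host: a container is just a process tree whose view of the system is filtered through kernel namespaces. A quick look (Linux only; no Docker required):

```shell
# Every process carries a set of namespace memberships, visible in /proc.
# A container is a process tree given its own copies of these (mnt, net,
# pid, uts, ipc, ...) plus cgroup resource limits -- no guest kernel boots.
ls /proc/self/ns
```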


To be honest I've never worked out why Docker gets so much press. If you use the Ezjail utility to configure and manage FreeBSD jails you have been able to do most of the things Docker does for years (stacked fs using unionfs, templates/flavours, snapshots, export/import etc) and this seems like a much simpler and more stable solution. The networking stuff is also easy using pf.


There's also Solaris Containers, which leverages ZFS snapshots, etc.:

http://en.wikipedia.org/wiki/Solaris_Containers

Of course, Solaris is even more despised by some people since they changed hands.

I think Docker appeals more to people, because:

- It's on Linux, which is more popular than FreeBSD or Solaris.

- It's very easy to set up and configure.

- Integration with Puppet et al.

- They have great marketing :).


Just a small reminder that you don't need to touch Solaris specifically. There's a heavily-developed OSS fork named illumos out there too. Some of illumos' distros are nicer than Solaris ever was.


All hail the whale.


Because almost no one around here uses FreeBSD it seems.


Some of us do, when appropriate.

Docker took a technology known to many who set up, administer, and manage machines, added some fluff, made things simpler, and marketed the idea.

In a crowd that might spend more time thinking about nodejs and callbacks vs promises, or how easily one can tip a rails app up on heroku, existing systems tools for things like jails/virtualization may either be overlooked or not a concern.

For every docker, there are people (of which I may be one) that think, "big deal, it is just x". Meanwhile that thing is getting traction and popularity. It might not last, but it is around and making noise now.


"added some fluff, made things simpler, and marketed the idea"

Well what on earth do you need a pre-assembled computer for? You just take some circuit boards and a soldering iron, then assemble everything over the weekend and you have a perfectly good computer. I don't understand why this Apple I thing is so popular, it's nothing special.

Far as I can tell, every successful piece of tech out there can be reduced to "Added some fluff, simplicity, and marketing to an already existing technology"


They also created an operating system (sure, it builds on existing tech, but they added a lot more than just fluff).


It is a simple abstraction of something that many people find very complicated - or choose to not spend time understanding / configuring.

(You could say same about Github, Travis etc.)


There's probably some fair value in making things simpler, not to mention value in marketing. If they can form a large community around it through their marketing, then the tooling will rapidly improve. So, the marketing can be a kind of self-fulfilling prophecy.


Honestly, I have no idea if the software I write runs on FreeBSD. I know it runs on Linux, and therefore I know I can package it with Docker.

And I think that's the crux of it--do you think anyone would use Heroku if its container environment were FreeBSD-based? I'm guessing not; nobody develops on FreeBSD, so they'd have to test on a separate FreeBSD VM before deploying.


Lots of people seem to develop on Macs, so Linux is a VM to them. And if it runs on a Mac it probably runs on FreeBSD. Joyent seems to be managing with non-Linux containers too.


That's a shame, FreeBSD is actually a pretty impressive OS. You should give it a try, mono-cultures aren't a good thing.


Because Docker is platform agnostic and easy to use. There have always been similar solutions, but none of them with such a simple interface and setup.


Containers are not virtual.

Containers aren't emulating or translating anything. They let you partition out system and network resources as you see fit (or to protect users from abusing each other) without running kernels within kernels and other performance-killing hokum.


Indeed. One way you can use Docker in production[1] is one container per machine: even when you're not using the containers to split resources, the ability to snapshot and move an application and all of its dependencies in a single, lightweight, easily deployed package is very exciting. And because there's no translation etc. happening, running a single container is pretty much identical, performance-wise, to running it directly on the machine[2].

[1] dotCloud don't actually yet recommend running Docker in production, but if you did...

[2] This was part of what Mailgun (part of Rackspace) said in their presentation at the Docker workshop in SF today.


You can actually do the same thing with LXC containers; it is trivial to rsync a snapshot or compressed archive of a snapshot to another host machine and run it there.
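For illustration, the snapshot-and-move idea works with nothing but standard tools. The paths below are stand-ins; a real LXC rootfs lives under /var/lib/lxc/<name>/rootfs, and you'd stop the container before archiving:

```shell
# Stand-in for a stopped container's root filesystem
mkdir -p /tmp/demo-rootfs/etc
echo "container config" > /tmp/demo-rootfs/etc/motd

# Snapshot: archive the whole rootfs
tar -C /tmp/demo-rootfs -czf /tmp/demo-snapshot.tar.gz .

# On the target host (after rsync/scp of the archive): unpack and start it
mkdir -p /tmp/demo-restored
tar -C /tmp/demo-restored -xzf /tmp/demo-snapshot.tar.gz
cat /tmp/demo-restored/etc/motd
```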


Why the downvote? Simple tools always work well. If you want process isolation on Linux, LXC + rsync might as well be the simplest route.


Docker is based on LXC containers. It bundles its own management layer that replaces the "rsync" solution you propose.


The point of Docker is the standardized container/management interface. Of course you can write your own, but then it's not too standardized is it?


What you describe is standardizing complexity where a simple solution is available. Naturally, when feasible, I prefer keeping things simple and straightforward enough that standardization is unnecessary.

Standardization for the sake of standardization is not defensible.


Anything not pro-docker usually gets downvoted by shykes and his ring of cronies.


There's definitely hype behind Docker, but this is a ridiculous accusation.


People do crazy things for money.


You have rationalized away the people who disagree with you by making them into a conspiracy. I don't know about said ring of cronies, but I guarantee that many downvoters are not associated.


My claim seems validated. Just look at both of my comments in this thread; they've both been downvoted deep into the negatives.


My downvotes were because the comments were disparaging and lacked proof. My only Docker affiliation is having read a bit about it lately. I had to search the page to see who shykes is.


Regarding your first footnote:

Is there an explanation why 'in production' is actively discouraged? Is that a limitation of the underlying lxc stuff?

I ask, because the interesting parts of docker _for me_ seem to be focused on the setup, deployment - and not the runtime. So if I create a working image and want to use that in production, wouldn't docker become passive as soon as I run that thing?

What's the danger here?


I expect the danger is that someone will lose money and blame the developers for as-yet unforeseen issues which can only be smoked out after a few years of heavy use. But a specific danger would be interesting to hear.


Docker maintainer here. That's basically it. There's no specific danger that we're worried about - just engineering best practices.

As long as we're not comfortable operating Docker at large scale ourselves (we run quite a lot of containers in production at dotCloud), we won't recommend that others do it.


Containers are not virtual.

They belong to a category of virtualization known as 'container-based virtualization', which implements virtualized perspectives of the system within the host kernel, effectively dividing the system into multiple systems (from the perspective of the affected processes). That is unquestionably virtualization.

I believe a statement closer to what you were looking to express was containers are not running under a hypervisor.

LXC still creates a performance impact, depending upon which options are selected, with particular note for the various memory accounting options. However, that impact is far lower than a hypervisor. In addition, startup times are vastly reduced since the kernel bootstrap and hardware detection concerns are rendered unnecessary.


I don't know about LXC, but back in the day, the impact of linux-vserver was within bounds of measurement error. Since then I have been using it reflexively on any server I touch, even if it ends with only one container.


Yes, Docker builds on top of Linux containers, but they seem to be releasing some neat features that make containers easier to use, manage, and reuse. For example: the docker-cluster project, https://github.com/globocom/docker-cluster, seems very interesting, as it might allow you to abstract the host with a cluster/logical group. Having this abstraction is key if you are running docker on a non-virtualized (bare metal) host, because it provides some level of fault tolerance against hardware failures. Can you do that with Linux Containers?

Edit: I just realized that docker clusters is not being developed by the docker team


If you're interested in cool projects built on top of Docker, take a look at http://github.com/dockerforge, it has a nice list.


You can also have a look at VxDocker https://github.com/websecurify/node-vxdocker


Exactly. A point that is important to make is that LXC VMs are bound to the same kernel and architecture as the host machine.

But then the "fix" is to have a farm of host machines covering all supported kernel and architecture configurations (mostly just Linux distributions). Migration and load balancing will be more rigid, as you can't move any machine to any host.

What does not seem doable is, say, supporting Windows guests, older Linux kernels, etc. That argues for a hybrid approach with a controller API on top that uses KVM or Xen hypervisors alongside LXC.


Why are you building your applications to need specific kernels/distributions? That sort of goes against the spirit of 12-factor apps -- if a particular kernel/OS is part of the app, put it in the app. (Thus, use a full VM.) Docker is for when that isn't the case.


Well, because for one it was tested and developed on one distribution. Because one might have chosen to use system-level packaging instead of copying code to /var/local or /opt, so it is taking advantage of transactional updates to the system, transitive dependencies, and pre/post-install scripts. The downside is being tied to a packaging system.

Also certifications. Government agencies, for example, will only accept certain OSes that have been certified. Sometimes it is simply because there are features in some distributions or kernels that aren't in others.


Basically, what you're saying is, Docker sucks at being a foreign porting target for things developed for some other system. Well, everything sucks at being a foreign porting target. Don't do that. Test and develop your app using Docker. Install your app into the container using Docker. These are the things Docker is for--it's a development aid, not some performance-boosting alternative to virtualization. You have to integrate it into your app's workflow; you can't just tack it on as some final "and then we also generate a Docker container version of our app" step at the end, or you lose every advantage Docker gives you.

Docker is made to, basically, develop apps the same way you develop them when using a PaaS like Heroku (or, more specifically, a PaaS like Dotcloud): have a frozen base+runtime image; compose a "slug" consisting of exactly the stuff in your build/ directory and layer it on top; and tell the target host to launch it. To upgrade, create a new slug, and rolling-restart your old instances into new instances.
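A sketch of that workflow (all names here are hypothetical; the Dockerfile freezes the base+runtime, and the build layers your app's "slug" on top):

```shell
# Hypothetical Dockerfile: frozen base image, app layered on top
cat > /tmp/Dockerfile.demo <<'EOF'
FROM ubuntu
ADD . /app
CMD ["/app/run.sh"]
EOF
cat /tmp/Dockerfile.demo
# To bake and launch the slug on a Docker host you would then run:
#   docker build -t myapp .
#   docker run -d myapp
# Upgrading = build a new image and rolling-restart containers onto it.
```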


I think you misunderstood. I never said Docker sucks. I said there are ways to handle various kernel+distro combinations with Docker just by having various hosts that runs those kernel+distro combinations, that was in defense of LXC.

I haven't used Docker yet, I am only familiar with LXC so far so I was commenting on that. If one does have a uniform or a restricted set of platforms that can also act as LXC hosts then well why not take it and run with it. It is more efficient and that might translate into performance and cost savings.


I recently wrote an article that covers some of this ground: http://www.sitepoint.com/docker-for-rubyists/

The basic idea behind Docker is that you don't have to create another operating system in order to just separate your processes from each other. This leads to containers being much more lightweight than virtual machines but also significantly less powerful (i.e. powerful as in ability to do something, not in terms of performance) in some areas.


Any chance you can elaborate on:

"(i.e. powerful as in ability to do something, not in terms of performance)"

Do you mean smaller units of functionality which perform at good levels? For example, I wouldn't want to deploy a large, monolithic service this way?


One thing that occurs to me: containers don't get their own network stacks, so you can't use a transport-level protocol (e.g. SCTP) in a Docker "guest" if it isn't programmed into the Docker host kernel. Whereas VMs are routed to at the network level, so they can do whatever they want with the packets they receive.


I meant to say that VMs can do a deeper level of process isolation. They also perform complete hardware virtualization, which means you can run a completely different OS inside the VM. However, in terms of performance, VMs are not necessarily faster than containers at all tasks.


The appeal of containers is narrower in scope than simply their size: they allow very simple isolation at the file-system level. That permits a very strict separation of concerns without sacrificing the resources required to spin up another VM.


I've been having trouble figuring out the value-add of using Docker over Ubuntu's built-in LXC functionality [1].

[1] https://help.ubuntu.com/12.04/serverguide/lxc.html


I also found myself asking this same question, and after careful consideration I ended up choosing LXC over Docker, and here are some reasons why:

    - LXC works fine on its own.

    - Docker has its own bugs, so you get all of the
      Docker bugs in addition to potential LXC bugs.

    - IPTables routing for containers to the outside
      world isn't that hard to manage.

    - LXC is simple and straightforward, and by
      comparison Docker is a convoluted confusing
      mess of additional layers of complexity.

    - LXC is already used in many real-world
      applications for operational software
      every day.
If you want to know anything else about real-world usage of LXC, please feel free to contact me (jay at jaytaylor com), or check out my relevant project: ShipBuilder [1].

[1] https://github.com/sendhub/shipbuilder


Shipbuilder = webapp-focused PaaS built from haproxy + LXC

Docker = app-focused container API thang built from LXC with a historically strong aufs+ubuntu focus and limited support for more exotic setups

LXC = do anything the hell you want


Looks like ShipBuilder has overlap with Docker, if not a direct competitor. A disclaimer wouldn't have hurt in my opinion.


I don't follow. ShipBuilder uses LXC and is a complete open-source self-hosted PaaS; a Heroku-clone. How is it a Docker competitor?

I cite it merely as an example of the sorts of cool things which are possible with LXC.


Truthfully, you plug ShipBuilder more frequently than I am comfortable with. It makes your related comments seem disingenuous.


Truthfully then, by your own moral high bar, shykes' plugging of Docker should make you really uncomfortable. If you allow yourself to take a step back and realize these threads are directed at real people trying to help each other with real problems (though arguably some do it for money), then you can allow yourself to see it as helpfulness.

Because in reality, what is the personal gain in plugging an open source, free product that helps people?


Fully agree!!


Here's another Stack Overflow question which addresses that exact question: http://stackoverflow.com/questions/17989306/what-does-docker...


Ignore everything about the way the Docker daemon is currently implemented; it's irrelevant to what Docker is. Docker is a container file-format standard (and a container registry-service protocol), allowing you to build single VM-like images on one computer, and then run them on another, with the only thing in common being that they both support "Docker container format."

Right now, the only thing that supports Docker container format is Linux's dockerd implementation, which happens to use LXC+AUFS. Later on, Docker containers could be deployed "baked" into Xen images, or (if all the code inside them is architecture-neutral) deployed onto a FreeBSD dockerd that uses jails, etc.

Docker is about the tools to construct and manage containers, not the specific technology behind the deploy-target; the deploy-target is effectively commoditized by having a common container format!


I read some more about Docker since posting the parent. The deal-killer for me is that Docker doesn't work on btrfs, and I've been running on btrfs for a while now. Apparently it's actually bugs in the aufs kernel code when using it with a btrfs filesystem.

I found a closed issue [1]; apparently Docker is currently focusing on refactoring the existing code to have a more extensible plugin-style architecture, and will eventually make a btrfs plugin. That's probably better in the long run for the project, since it means they won't duplicate work, but for right now, it means that my setup isn't supported.

Also, support for non-amd64 hosts or guests is highly unsupported and somewhat not-working right now, and networking configuration leaves something to be desired.

So I'd say Docker's still really, really immature.

[1] https://github.com/dotcloud/docker/issues/443?source=cc


You'll be happy to know that btrfs support will come through devmapper by 0.7.


I started reading that documentation. It's excellent, and I think possibly the longest piece of Ubuntu documentation I've seen.

OTOH, docker lets me go "docker run ..." and have all of that done for me. That's where the value-add is for me.


I would love to migrate 50+ KVM VMs to LXC containers, but there seem to be some problems left with security[1][2]. I can't wait to get my hands on Docker, but I lack the SELinux knowledge to secure everything the 'proper' way.

Is LXC (and therefore Docker) really ready for Production yet?

Edit: Formatting.

---

[1] http://mattoncloud.org/2012/07/16/are-lxc-containers-enough/

[2] https://blog.flameeyes.eu/2010/06/lxc-and-why-it-s-not-prime...


It depends on how you are using containers. If you control what code is run in them and who has access to the containers and their hosts, then production use should be fine as far as security goes.

However, if you're trying to run something which lets untrusted people login to the containers or run arbitrary untrusted code in the containers, then I certainly wouldn't recommend doing that with containers in a production environment.

One project you might like to keep an eye on is CoreOS [1]. As I understand it, their goal is to create an OS which will come configured to safely run containers. Once it is ready I would expect it will be suitable for use in a production environment.

[1] http://coreos.com/


I really don't like giving up the isolation of modern hypervisors, particularly those with Intel virtualization extensions. Docker (and LXC) seems like a huge step backwards for security. I'm sure there are use cases, but I'd never multi-tenant with it.


> I really don't like giving up the isolation of modern hypervisors

You don't have to! Think of docker as a unit of software delivery, rather than resource allocation. It's very common to use Docker to either a) deploy only trusted containers on the same machine, or b) deploy only 1 container per machine.

There are also cases where linux cgroups and namespaces are an appropriate security mechanism (usually combined with other best practices, like apparmor/grsec/selinux, network lockdown, active monitoring, running things as non-root etc.) but it's not mandatory.

Here's our latest overview of container security: http://blog.docker.io/2013/08/containers-docker-how-secure-a...


How about OS patching? If I am running hundreds of different containers and I need to patch the OS (let's say upgrade the kernel or a driver), will I affect hundreds of applications at once? If so this will be a problem for several shops.

How about built-in failover? On a virtualized environment you can run a cluster and the VM will move to another host in case of failure. Does docker support that? Is that what the docker-cluster project (https://github.com/globocom/docker-cluster) is about?


OS updates (updates to files inside containers) would be on a per-container basis.

Kernel upgrades would affect all of the containers running under that kernel (machine or VM) at the same time. Though, if you wanted to be super cautious, you could upgrade kernel on an empty container host (quite easy if virtualized) and migrate containers to it and test them on an individual basis.


I agree with you, but at the same time I run a "private cloud" with 100+ containers at any one time. We do "sort-of" multi-tenant: there are multiple customers, but we run all of their services, so nobody outside of our organization needs OS-level access to the VMs/containers.

For "real" multi-tenant systems, I'd want full VMs, but as noted elsewhere, you can run Docker containers inside a VM, and still benefit from sharing resources by carving up a large VM into many smaller, isolated subsets.

We run OpenVZ today, but I'm following Docker closely: we plan to migrate to LXC, and then going with Docker might very well be the best alternative.


> Docker (and LXC) seems like a huge step backwards for security.

Sorry, but the link says it all. No further comment from me: http://marc.info/?l=openbsd-misc&m=119318909016582&w=2


VT-d, VT-x. 2007 != 2013. The number of hypervisor exploits is far fewer than the number of local root exploits on various shitty OSes (including OpenBSD).


Do you have a link for this statistic? Since I don't know of a local root privilege escalation in OpenBSD for several years, that is quite a high bar.

Edit: this is not an OS-or-VM problem. You will still have local problems, and now, in addition, rooting a server may give you access to even more servers that run on your hypervisor.


I don't think that your link disagrees with the OP at all. Yes, bare metal is more secure than hardware virtualization; but hardware virtualization still provides greater security than kernel virtualization.


I've always wanted to ask a question about Docker: if the local development machine is Ubuntu 12.04, I cannot deploy my Docker image build to a 10.04 Ubuntu server, right? (Unless you run a 12.04 virtual machine or something.)


Yes, you can do that, but Docker only supports 64-bit operating systems at the moment, and you need a kernel that supports Linux containers and union file systems.

I doubt there are packages for 10.04, so you'd be on your own getting it working.


There are no official images for Ubuntu 10.04, but you can create your own with:

    debootstrap lucid ./rootfs && tar -C ./rootfs -c . | docker import nl/ubuntu-lucid
You can then run 10.04 containers with:

    docker run -i -t nl/ubuntu-lucid bash


That's really useful - I've often wondered how to do it, and I've never seen it put so succinctly.


Yes you can. Docker doesn't care about the underlying distro, as long as it can run on it. You can build a container on a Red Hat host machine, and transfer it to an Ubuntu host machine - it will run just fine on both.


But Docker can't run on 10.04, right?


It depends on the kernel version rather than the distro version. So if the 3.8 kernel is compatible with the 10.04 distro, you'd be fine.

Sorry I don't actually know if that kernel is compatible with such an old distro. I sort of doubt many people would have been interested enough to do the backport...


I haven't tested docker on Ubuntu 10.04. But you can probably expect the following:

1) You will need to boot a 3.8+ kernel, which is definitely possible but probably not available as a 10.04 package (unless someone has backported it).

2) Docker has a few userland dependencies as well. Most of them are extremely stable (tar, iptables, ip). But the lxc userland scripts have changed a lot in the last couple years. Docker is known to work with version 0.8, and that version might not be available in Ubuntu 10.04.

In short, I expect that docker will not work out of the box on a vanilla 10.04 system, but it can be made to work with a fairly small amount of customization.

If you're interested in trying it out, feel free to join the #docker IRC channel on Freenode. We'll help you out!


The kernel version has to be the same.


No, it just has to be modern enough to support LXC/Docker.


No, there isn't a hosted child kernel. What the parent means is that the guest and host use the same kernel instance, and the distro needs to be able to handle that to run.


One of the issues I found with contributing to open source is the time it takes to get a build environment up and running. Since different people face different kinds of issues, and projects usually lack exhaustive documentation, I've always felt adding a lightweight image of the build environment could help. I hope in future Docker or similar projects pave the way for it.


Not sure if you are talking about production or only development environments, but Vagrant seems to provide a good solution for that: http://www.vagrantup.com/


I thought docker just makes creating, deploying and managing LXC "enabled" applications easier. Do they add anything to the LXC ecosystem other than the online sharing of containers?


I'm merely an observer, since Docker interests me, but from what I gather the magic of Docker is LXC and AuFS combined.


Does github add anything to git other than a multi-tennant gitweb with a prettier interface?

git: worth nothing.

github: worth a billion dollars.


I'm sorry, but git is not "worth nothing"; it's just that it's a public good and doesn't belong to anyone to sell, hence it has no market value. But consider how much software companies would pay not to have git taken away from them, and then consider how much they would pay not to have GitHub taken away[1]. Which is harder to replace? I'm betting on git.

[1]: Imagine a hypothetical scenario where GitHub had Mercurial as an alternative (for the case git was taken away.)


Whoa. I am not questioning the business model of something I am obviously not familiar with. It was more of a technical clarification/question since the topic is purely technical.


Compare this with vagrant


If you have a Linux host, I highly recommend vagrant-lxc. This adds the speed/memory benefits of lxc/docker to the awesomeness that is Vagrant.

You don't have docker style provisioning/overlays, but that may be an advantage, especially if you already have provisioning scripts compatible with vagrant.

To oversimplify: vagrant is for development, docker for deployment. But that's oversimplified, there's lots of overlap.


Vagrant mostly just generates virtual machines (with the option of running a provisioner), so it would basically be the same comparison.

Edit: I suppose you could be using Vagrant to provision VPSs and use your provisioning tool to deploy an app in one fell swoop, but most people don't reprovision a box every time they redeploy their software. Vagrant lets you build a base box, Docker is for deployment on top of that box.


I also thought that another difference was that Vagrant can also manage CPU-level (hypervisor) VMs (as opposed to just Linux containers) - one of the main use cases for Vagrant would be running it on your local computer - a laptop running OS X or Windows, for example. Correct me if I'm wrong, but you wouldn't be able to run Docker on a Windows laptop, because you're just containerizing the parent OS... I would be curious to see how this could run on OS X.


I've run Docker on top of Linux, powered by a Vagrant VM, just fine. It's exactly what their tutorial walks you through: http://docs.docker.io/en/latest/installation/vagrant/

I don't think OSX or Windows will run linux containers ever though. Maybe something conceptually similar, but I doubt it would be "Docker".


Since vagrant spins up full VMs, which need to boot, etc., it's slower. Spinning up VMs with vagrant, in my experience, takes tens of seconds to minutes. Launching a docker app in a container takes a few seconds (allegedly... I've never actually tried it myself).


Actually the typical start time of a docker container is in the 10-100ms range :)
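For anyone curious, you can get a rough feel for this yourself; the docker invocation below is illustrative and assumes a running daemon plus a small image such as busybox, while the host-process comparison runs anywhere:

```shell
# On a Docker machine you could time a full container start:
#   time docker run busybox true
# A container start is roughly a process spawn plus namespace/cgroup setup,
# so compare it with a plain process spawn on the host:
start=$(date +%s%N)
/bin/true
end=$(date +%s%N)
echo "plain process start+exit: $(( (end - start) / 1000000 )) ms"
```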


I wonder how Microsoft's Drawbridge OS (http://research.microsoft.com/en-us/projects/drawbridge/) will compare to LXC, and the Docker APIs? Currently Drawbridge looks like it's lacking adoption, and doesn't seem to be widely available. Regardless, the container model looks like it solves a lot of PaaS security issues without the overhead of VMs (Iaas).


Ha! Crazy to see a question I asked 5 months ago pop up on Hacker News.

The docker.io team has said that they don't consider it to be production ready [0]. Has anyone experienced any major problems? Anyone using it in production?

[0] http://blog.docker.io/2013/08/getting-to-docker-1-0/


I found myself asking the same question. See my related comment: https://news.ycombinator.com/item?id=6378823


Holy cow, the unit test case is fantastic.


It is a good example, but I wonder how licensing would treat it. If I'm running hundreds of unit tests, each against a snapshot of my database, and my database is Oracle, they would likely view that as hundreds of instances which would each need a license.


There are Oracle licensing options to do this per CPU rather than per instance. That is what many people who run Oracle farms on vSphere do to take advantage of the consolidation and reduce overall license costs.


Unless your application is tied to Oracle-specific extensions, you should think about using PostgreSQL for your dev and testing environments. They both hew pretty closely to the SQL standard, and there are versions (EnterpriseDB) that have an explicit Oracle compatibility layer that works.

And I bet it feels really good to look the Oracle salesperson in the eye and say, "We've been doing most of our dev work on Postgres lately."


Part of Team Foundation Server for quite some time now.

With Team Foundation Server you can set up a build that ramps up Hyper-V instances with build results.

Sure it is all Microsoft stuff, but the concept is nothing new.


but aren't Hyper-V instances way more heavyweight?


When using Hyper-V, Windows works like good old mainframe virtualization.

The hypervisor takes the OS role, and what you see as main OS is actually a guest OS as well, that has control privileges over the other virtualized instances.

I never used this type of CI build, so I cannot speak much about the real resource usage.


How is this different from HPUX or Solaris Package managers? Asking to learn.



