These answers go with our interactive quiz on server virtualization. If you haven't taken the quiz yet, take it now.
1. On a particular server, within each virtual machine:
- You can run any version of Windows without regard for the version(s) running in the other virtual machines
- The versions of Windows must be no more than one release level apart
- The versions of Windows must be exactly the same
Server-virtualization software imposes no constraints on the versions of the Windows Server operating system (or Linux) that you place in each virtual machine, although a completely new version of Windows may require a compatibility check against your server-virtualization software before you install it. Some data centers still run old, no-longer-Microsoft-supported versions of Windows, such as NT 4.0, in virtual machines -- in order to support old application software that the business groups have not yet had time to remediate for a newer Windows platform. Caution and careful testing are recommended where external interfaces on the server (for example, USB or FireWire) are not fully supported by the version of Windows installed on a virtual machine, or where there are known support issues for these interfaces with the version of VMware or other server-virtualization software being used.
2. On a particular server:
- You can reboot a virtual machine without it having any effect on the other virtual machines
- If you reboot one virtual machine, all the other virtual machines reboot at the same time
- If you need to reboot one virtual machine, you have to first reboot the physical server: the individual virtual machines then reboot automatically when the physical-machine reboot is finished
Rebooting a virtual machine can be done without touching the physical machine or the server-virtualization software, and it has no effect on the other virtual machines: they are completely isolated from one another. Note, however, that rebooting the physical machine (that is, rebooting the server-virtualization software) disrupts the operation of all the virtual machines.
3. When choosing which applications or databases to place on one physical machine (using a virtual machine for each application), it is best to:
- Choose a mixture of applications/databases with different workloads (some light, some heavy)
- Keep all the heavy-workload application/databases together and all the light-workload applications/databases together
In general it is better to install a mixture of heavy-workload and light-workload applications on each physical server in order to make the best use of the server. The heavy-workload applications benefit, in terms of performance, from being able to momentarily use a large part of the server's CPU and memory resources during traffic peaks; and the light-workload applications effectively get a "free ride" on the server. By contrast, if you combine like with like, two risks arise. A server hosting several heavy-workload applications is more likely to become overcommitted, giving poor response times during peak loads. With light-workload applications, you may have to place so many applications on one server to fully utilize its resources that you end up with "too many eggs in one basket". Mixing heavy-workload and light-workload applications avoids both of these potential problems. It should be noted, however, that some organizations steer clear of server virtualization when it comes to mission-critical applications -- preferring to put them on dedicated servers rather than having them share a server with even light-workload applications.
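The reasoning above is essentially about statistical multiplexing: bursts from many heavy applications are more likely to coincide than bursts spread across a mixed population. The sketch below illustrates this with a toy simulation; all of the workload figures and function names are illustrative assumptions, not numbers from the quiz.

```python
# Toy simulation (illustrative numbers only, assumed for this sketch):
# compare the peak combined CPU demand of a "mixed" placement versus
# an "all-heavy" placement on one physical server.

import random

random.seed(42)

def sample_demand(base, burst, burst_prob):
    # CPU demand (% of one server) of a single app at a random instant.
    return base + (burst if random.random() < burst_prob else 0)

def peak_server_demand(apps, samples=10_000):
    # Estimate the worst-case combined demand seen across many instants.
    peak = 0.0
    for _ in range(samples):
        total = sum(sample_demand(*app) for app in apps)
        peak = max(peak, total)
    return peak

heavy = (15, 25, 0.2)   # 15% base load, bursting to 40%
light = (2, 3, 0.1)     # 2% base load, bursting to 5%

mixed     = [heavy] * 2 + [light] * 8   # 2 heavy + 8 light apps
all_heavy = [heavy] * 10                # 10 heavy apps together

print(f"mixed placement peak demand:     {peak_server_demand(mixed):.0f}%")
print(f"all-heavy placement peak demand: {peak_server_demand(all_heavy):.0f}%")
```

Under these assumed numbers, the all-heavy server's peak demand far exceeds 100% of the machine (overcommitment), while the mixed placement stays close to capacity.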
4. Server virtualization and the use of blade servers are:
- Technically incompatible
- Technologies that should be combined with caution to avoid putting "too many eggs in one basket"
- Technologies that should be used together whenever possible
There is nothing technically wrong or difficult about placing server-virtualization software on blade servers. However, this practice should not be pursued without careful consideration of the concentration of risk that it entails. For example, if you build ten virtual machines on each of sixteen blade servers, the total number of applications running in the blade-server shelf could be 160. If anything bad happens to the shelf (fire, power loss) and adequate backup or redundancy (outside of the shelf) does not exist, you will simultaneously lose 160 applications, with a potentially devastating impact on business.
5. Introduction of server virtualization in a data center:
- Will make the introduction of a storage area network (SAN) absolutely necessary
- Will make the introduction of a storage area network (SAN) desirable
- Will not materially change storage requirements
If you have not already established a SAN in the data center, or have not extended SAN services to the servers that you are considering as candidates for replacement by virtual machines, it is very likely that the aggregate storage demands of the applications or databases running on each virtualized physical server will exceed what can be provided on hard drives within the server. In any case, from a risk point of view, having "many eggs in one basket" will increase the importance of having the data belonging to the applications and databases running on a single physical machine mirrored to an offsite/disaster-recovery site, or at least backed up locally -- assuming the SAN already has some level of redundancy.
6. When it comes to avoiding major outages, the use of server virtualization:
- Reduces the frequency of hardware-related service outages
- Has no material impact
- Requires that levels of redundancy be increased in order to avoid an increase in outages affecting multiple applications/services
Even without the potentially worrying combination of blade servers and server virtualization, using server virtualization on standard servers puts several "eggs in one basket". Given that hardware failure in one server will take out, say, ten applications/databases, it is generally desirable to provide some level of redundancy, permitting the entire contents of the server to be quickly moved to a standby server if the main server fails. Fortunately, server virtualization makes it somewhat easier to move applications to redundant hardware. There are software tools, provided by server-virtualization software vendors, which help perform such migrations quickly and easily. Also, it is possible to pre-install the applications for a group of main production servers on a smaller number of standby servers, knowing that only those applications running on one main server will need to be moved to the standby server at any one time. For example, one physical standby server could contain fifty virtual machines (with an application installed in each one); this standby server could thus act as standby for five main production machines, each containing ten virtual machines.
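The standby arithmetic in that example can be sketched as follows. The figures (ten virtual machines per production server, fifty pre-installed on the standby) come from the text; the function name is ours.

```python
# Standby-capacity arithmetic from the example above: one standby
# server can cover several production servers because only ONE
# production server's worth of VMs is assumed to fail (and therefore
# run on the standby) at any one time.

def standby_fan_in(vms_per_production_server, standby_vm_capacity):
    # Number of production servers one standby server can cover.
    return standby_vm_capacity // vms_per_production_server

# Fifty pre-installed VMs on the standby, ten VMs per production
# server => the standby covers five production servers.
print(standby_fan_in(10, 50))  # → 5
```

Note the key assumption: this fan-in only holds if production servers fail one at a time; a shelf-wide event (as in Question 4) would overwhelm the standby.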
7. When deciding on the placement of development, test/QA, and production instances of applications and databases:
- You can, and generally should, use virtualization to put the three instances of an application/database on the same physical server, so that the development and test environments are an accurate reflection of the eventual production environment
- You should dedicate physical machines to hosting each type of instance, so that the three types are not on the same server, in order to make it easier to secure the production environment
- It really doesn’t matter where you place the different types of instance
Taking all considerations into account, it is generally better to designate physical servers as "development", "test/QA", and "production", and to place instances of applications and databases on them accordingly. This policy is driven by security and, in some industries, by regulatory considerations dictating different treatments for the different environments (particularly for production).
8. In a virtualized-server environment, compared with a traditional server environment:
- It is easier to keep track of software licensing
- Tracking software licensing is neither materially easier nor harder
- It is significantly harder to keep track of software licensing
In an "ideal" data center, it would be no harder to keep track of software licensing for virtualized servers. However, in the real world, at least at present, experience shows that it is harder. In a virtualized environment, the ease with which virtual machines can be created, combined with the difficulties of finding out from business groups exactly what software is required on, or has been installed on, each virtual machine, makes tracking license requirements and license usage significantly more difficult. See also the answer to Question 11.
9. Introduction of server virtualization in a data center:
- Will make security management easier
- Will have no material impact on the complexity of security management
- Will make security management more difficult
Experts agree that adequately securing access to, and information stored on, virtual machines presents new challenges, over and above those of a traditional environment. To start with, access to the virtualization software (VMware or similar) must be very tightly controlled: the wrong person with access to the server at the VMware level can engage in a lot of mischief. Second, anyone with access to a virtual machine (whether granted officially or acquired by nefarious means) can download an application that mounts an attack on the virtual "walls" that isolate the virtual machine from the other virtual machines. Third, it is more complex to implement access restrictions at a network level for each individual virtual machine; network-based security may therefore end up being set at the level of the least-sensitive application running on a physical machine (particularly if the network/firewall-management team is busy).
10. When server virtualization is introduced in a data center, the Configuration Management Database (CMDB) used to support data center operations:
- Will not need to be modified or replaced
- Can be retained, although revisions will need to be made to the naming schemes used for servers
- Will need to have its underlying database design (i.e., schema) radically redesigned and, if this is not possible with the current CMDB application, a new CMDB software platform may have to be purchased
Although numerous suppliers of software for Configuration Management Databases (CMDB) have started to embrace server virtualization, many data centers still run older versions of CMDB products, or products that have not yet been revised by their designers. These products may not have the necessary underlying database designs that recognize a virtual machine as a "data entity" and can represent the relationship "Virtual Machine A is on Physical Server X". Many of the things that go with a physical server (such as the version of the operating system installed on it, its IP address, and so on) must now be associated with a virtual machine -- while still allowing these things to be associated with non-virtualized servers. These requirements mean that the database design underlying a CMDB requires a major overhaul for the product to have any hope of being useful to a data center that has started to introduce virtualization. These changes will inevitably "break" the application layer of the product, which then has to be completely reprogrammed, introducing many new elements to the user interface. In summary, you will be looking at a very major CMDB upgrade, if not a replacement CMDB product.
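To make the schema change concrete, here is a minimal sketch (using SQLite; all table and column names are our own illustration, not taken from any real CMDB product) of the kind of design described above: attributes such as OS version and IP address hang off a generic "machine" record, which may be physical or virtual, and a virtual machine records which physical server hosts it.

```python
# Minimal CMDB-style schema sketch (illustrative names, not a real
# product's schema). A self-referencing host_id column lets the same
# table hold physical servers (host_id NULL) and virtual machines
# (host_id pointing at the hosting physical server), so OS version
# and IP address can attach to either kind of machine.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE machine (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    os_version TEXT,
    ip_address TEXT,
    host_id    INTEGER REFERENCES machine(id)  -- NULL => physical server
);
""")

# Physical Server X hosts Virtual Machine A.
conn.execute("INSERT INTO machine VALUES (1, 'Server X', NULL, NULL, NULL)")
conn.execute(
    "INSERT INTO machine VALUES (2, 'VM A', 'Windows Server 2003', '10.0.0.5', 1)"
)

# The relationship "Virtual Machine A is on Physical Server X":
row = conn.execute("""
    SELECT vm.name, host.name
    FROM machine AS vm JOIN machine AS host ON vm.host_id = host.id
""").fetchone()
print(row)  # → ('VM A', 'Server X')
```

A CMDB whose schema has no way to express the `host_id`-style relationship is the situation the answer above describes.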
11. In a virtualized-server environment, compared with a traditional server environment:
- The costs of software licensing tend to decrease because business groups can manage their licensing requirements more tightly
- Software licensing costs tend to remain about the same
- Software licensing costs tend to increase because business groups request far more “machines” (knowing that virtual machines are easy and cheap to add)