1.1 vmmax Cloud
The vmmax Cloud platform is an enterprise-class private cloud node system with Type-1 and Type-2 hypervisor and container engine technology that runs virtual machines and containers at near bare-metal performance: 98.2% with NX, 96.9% with VX, and 98.9% with CX. In some I/O-intensive operations it is even faster.
The vmmax Private Cloud is a computing service offered either over the Internet or a private internal network and only to select users instead of the general public. Also called an internal or corporate cloud, private cloud computing gives businesses many of the benefits of a public cloud - including self-service, scalability, and elasticity - with the additional control and customization available from dedicated resources over a computing infrastructure hosted on-premises. In addition, vmmax private clouds deliver a higher level of security and privacy through both company firewalls and internal hosting to ensure operations and sensitive data are not accessible to third-party providers and/or services.
Benefits of vmmax Private Cloud
1.2 vmmax Nodes
The vmmax Node is a complete server system that operates as part of a cluster of machines. What makes vmmax nodes unique is that the machines in a cluster can be of different types: for example, an EC2 instance on AWS, a virtual machine on Azure or Google, a Raspberry Pi device, a Zadara VM instance, a VMware virtual machine, a KVM virtual machine, or simply a bare-metal device. Basically, any type of device (x86, x64, ARM) that can run the Ubuntu Server operating system. This flexibility comes from the structure of the vmmax hypervisor and the container engine technology. There are three node types that we distinguish by the hypervisor:
- vmmaxVizor NX
is running the vmmaxOS operating system on bare-metal and is a Type-1 Hypervisor / Container Engine
- vmmaxVizor VX
is running the embedded vmmaxOS Kernel system on top of the Ubuntu Server operating system and is a Type-2 Hypervisor / Container Engine
- vmmaxVizor CX
is running the embedded vmmaxOS Kernel system on top of the Ubuntu Server operating system and is a Container Engine
The vmmax core hypervisor technology is based on the KVM open-source project with modified kvm-intel.ko, kvm-amd.ko, and kvm-arm.ko modules. Furthermore, the core container engine is based on the LXD open-source project. The vmmaxOS Kernel that runs the hypervisor and container technology is, however, proprietary and uses the Ubuntu Server 20.04 operating system as an installation proxy.
1.3 vmmax Control Center
vmmax Control Center is an advanced server management software that provides a centralized platform for controlling your vmmax nodes, allowing you to deliver a virtual infrastructure with confidence. The vmmax Control Center includes management modules for App Stacks, Databases, Containers, Virtual Machines, Connect Pools, Users, Security Gateway, Auto-SSL, Security Tunneling, Snapshots, Backup, and more. With its simple design, the vmmax Control Center is the component of the vmmax Cloud platform most appreciated by administrators and operators.
The vmmax Control Center is accessible through your favorite HTML5 Browser or with the free vmmax Control Center Client for Windows available on our download page. Please note that there is no difference in functionality between the two, except that the installable client uses a separate window for each console session.
1.4 vmmax Connect Client
vmmax Connect Client enables a digital workspace with the efficient delivery of virtual desktops, terminals, and applications that equips workers anywhere, anytime, and on any device. With deep integration into the vmmax connect technology, the platform offers an agile foundation, modern best-in-class management, and end-to-end security that empowers today’s Anyware Workspace.
vmmax Connect Client enables you to:
- Enable Remote Work
Keep employees connected and productive anywhere they work and on any device with a consistent, personalized desktop environment.
- Build Resiliency
Adopt a scalable, cloud-based platform that offers the resiliency needed to meet change head-on with flexible deployment options across private and public clouds.
- Modernize Operations
Transform legacy infrastructure with integrated leading-edge technologies that automate the provisioning and management of virtual desktops and apps and deliver the personalization required by end-users.
- Secure Data and Achieve Compliance
Ensure secure remote access to corporate resources from any device with intrinsic security built into the vmmax infrastructure. Integration with Carbon Black boosts security using a zero-trust model.
- Improve ROI
Achieve cost savings and realize business value with more flexible and reliable access to resources.
1.4.A vmmax Connect HTML5 Client
The vmmax Connect HTML5 Client is accessible through your favorite HTML5 Browser (Chrome, Edge, Firefox...) from any desktop or mobile (iPhone, iPad, Android) device and provides a secure connection to your virtual desktop, terminal, and application.
1.4.B vmmax Connect Windows Client
The vmmax Connect Windows Client provides a secure protocol to access your virtual desktop, terminal, and application. The Windows client provides enhanced features such as AI-Driven real-time optimizations for image quality, bandwidth, latency, and performance adjustments.
1.5 vmmax CLI Client
The vmmax CLI Client provides a secure protocol to access your vmmax Nodes directly to perform operations such as starting, stopping, and snapshotting virtual machines and containers. It is generally used by administrators to automate certain tasks.
1.6 vmmax Live Demo and/or PoC
We offer free Live Demo and paid PoC (proof of concept) workshops. Please contact us to schedule a date and time with our vmmax consultants.
3.1 Register a Node
Registering a cloud node to the vmmax Control Center is an easy process. Follow these steps to register your cloud node:
Step 1 - Preparation
To register your cloud node to the vmmax Control Center you need to have the following information:
If you have purchased a cloud node, this information will be sent to you by email or secure-dark-letter. If you have used the vmmax Cloud Platform Installer to deploy your cloud node you will see your Node ID and Node Token at the end of the installation process.
Step 2 - Register your Cloud Node
- Open the Cloud Node Dialog
- Enter the Node ID
- Enter the Node Token
- Enter the domain name or IP address of the node
- Enter the IP address of the node
- Enter a tag name for the node, e.g. DC Room 1
- Select the high availability option
- Select the status option
- Click on the Save button
"FEATURE EXPLAINATION" - High Availability Option
The High Availability Option for the cloud node determines the synchronization of the vmmax Control Center database. If the node hosts a vmmax Control Center instance, then the synchronization agent is enabled or disabled otherwise. The synchronization is activated when the cluster contains at least two nodes that host the vmmax Control Center. The Primary and Replica settings determine the direction of the synchronization process. If for example, Node 1 is Primary and Node 2 is Replica, you should connect and work on the Node 1 vmmax Control Center which then replicates the database to the Replica Node 2. If for some reason your vmmax Control Center fails on the Node 1 you can then switch and connect to the vmmax Control Center on the Replica Node 2 and continue your work until you have resolved the failure on the Primary Node 1. Note: You can automate the failover process with the Load-Balancer in the Gateway Manager. - Status Option
If status is set to Maintenance the node will not be listed as an available resource in the cluster. The node will continue to operate running virtual machines and containers.
"IMPORTANT"
Node Tokens are the key hashes that enable access and communication to a cloud node and should be kept secret and in a safe place!
3.2 Node Information
The Node Information Dialog is an important source of information about the System (Bare-Metal/Virtual), CPU, Memory, Audio, Graphics, Network, Drives, RAID, Partitions, and other hardware installed on your node.
3.3 Store Managers
There are two Store Managers, the Backup Store Manager and the ISO Store Manager. Both have the same functionality of managing files stored in a specific location on the node. You can upload, download, and copy files to another node with the Store Manager. When you take a backup of a virtual machine or a container, for example, the backup file will be placed into the Backup Store.
Backup Store Manager
"FEATURE EXPLAINATION" The Backup Store Manager contain the following functionality:
- Upload Button
is used to upload backup files into the store with the Upload Dialog. - Copy Button
will open the Copy Dialog where you can choose a copy action. Copy actions include download to local computer and copy to another node in the cluster. - Delete Button
is used to delete a file from the store.
"TIP"
You can use a backup file as a template to automate the deployment of virtual machines and containers with vmmax CLI.
ISO Store Manager
"FEATURE EXPLAINATION" The ISO Store Manager contain the following functionality:
- Upload Button
is used to upload iso files in to the store with the Upload Dialog. - Copy Button
will open the Copy Dialog where you can choose a copy action. Copy actions include download to local computer and copy to another node in the cluster. - Delete Button
is used to delete a file from the store.
3.4 Gateway Manager
The Gateway Manager is used to manage the firewall rules of the cloud node, to reverse proxy, load balance, and/or fail over domains, and to install valid domain SSL certificates. vmmax Cloud Node deployments activate the firewall by default and allow traffic on ports 22, 80, 443, and 444 only. In an ideal operation you would never need to open any other ports; instead, you would expose your apps over a subdomain that is secured with an SSL certificate and/or a secure tunnel.
Gateway Manager
Domain Reverse Proxy
To add a reverse proxy for a domain, follow these steps:
- Click on the NEW RULE button to open the Gateway Rule Dialog.
- Select the Domain Name Service option in the Gateway Rule Dialog.
- Enter the from IP:Port address. Use 0.0.0.0:Port to allow any IP to come through.
- Enter the destination IP:Port or DomainName.com:Port address.
- Single HTTPS Forward
Your domain, which will become SSL certified https (secure), can forward to one internal http (unsecure) port. The SSL chain will not break and your connection will stay secure.
- Load Balancer
enter multiple destination addresses separated by ; (semi-colon) to enable the load balancer
for example: 1.1.1.212:443;1.1.1.213:443;externalservice.mydomain.com:443
- Weighted Load Balancing
for example: 1.1.1.212:443 weight=3;1.1.1.213:443;externalservice.mydomain.com:443
With this configuration, every 5 new requests will be distributed across the application instances as follows: 3 requests will be directed to 1.1.1.212:443, one request will go to 1.1.1.213:443, and one to externalservice.mydomain.com:443.
- Mixed Protocol Load Balancing
Mixed protocol balancing is not supported. For example, 1.1.1.212:443;1.1.1.213:80 will not work because one destination uses HTTPS (443) and the other HTTP (80).
- Enter a comment/tag
- Click on the Save button
The rule will be created and a valid SSL certificate (from Let's Encrypt) will be installed for that domain and set to automatically renew every three months.
"IMPORTANT"
Please make sure that your domain name resolves/points to the IP address of your node beforehand.
Firewall Rule
To add a firewall rule, follow these steps:
- Click on the NEW RULE button to open the Gateway Rule Dialog.
- Select the TCP/UDP Firewall Service option in the Gateway Rule Dialog.
- Enter the from IP:Port address. Use 0.0.0.0:Port to allow any IP to come through.
- Enter the destination IP:Port address.
- Select one of the rules ALLOW IN, DENY IN, or DENY OUT.
- Enter a comment/tag
- Click on the Save button
The rule will be created and the port will be opened/blocked in the firewall.
"IMPORTANT"
Port Ranging is not supported. Opening ports with the Firewall TCP/UDP Service should be avoided in general; instead, try to expose your application/service through a Domain Service Rule.
3.5 Statistics
The Statistics Dialog is used to view and analyse the performance of a cloud node. There are two statistics dialogs that can be used:
Statistics
will display a set of line charts for the CPU, Memory, and Disk usage of the current day
Statistics (Glances)
will display a terminal and run the Glances app to display Real-Time performance metrics.
"FEATURE EXPLAINATION" Here is a list of hotkeys that can be used in Glances:
- a – Sort processes automatically
- c – Sort processes by CPU%
- m – Sort processes by MEM%
- p – Sort processes by name
- i – Sort processes by I/O rate
- d – Show/hide disk I/O stats
- f – Show/hide file system stats
- n – Show/hide network stats
- s – Show/hide sensors stats
- y – Show/hide hddtemp stats
- l – Show/hide logs
- b – Bytes or bits for network IO
- w – Delete warning logs
- x – Delete warning and critical logs
- 1 – Global CPU or per-CPU stats
- h – Show/hide this help screen
- t – View network I/O as combination
- u – View cumulative network I/O
- q – Quit (Esc and Ctrl-C also work)
3.6 Update/Reboot
The Update/Reboot Dialog is used to trigger the update and/or reboot process on a cloud node. The following options are available in the Update/Reboot Dialog:
- Reboot Node
will reboot the cloud node without applying any updates.
- Install Updates
will install security updates and updates for the vmmax Control Center.
- Install Updates and Reboot Node
will install security updates and updates for the vmmax Control Center and reboot the cloud node.
4.1 Create App Stacks
Creating an App Stack is easy, just follow these steps:
- Click on App Stacks in the left navigator
- Click on the Actions menu (three dots)
- Click on Create App Stack
- Enter a name for the App Stack
- Select the node you want to create the App Stack on
- Select your App Stack solution
- Click on the Create button and wait 1-3 minutes and your newly created App Stack will be listed.
"TIP"
After you have created an App Stack, open the console and read the /root/config.readme file for more details.
4.2 Compose App Stacks
The Compose App Stack Dialog is used to define and create custom App Stacks with a YAML definition.
Example YAML Definition for WordPress
version: '3.1'

services:

  # WordPress application container, published on host port 8080
  wordpress:
    image: wordpress
    restart: always
    ports:
      - 8080:80
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - wordpress:/var/www/html

  # MySQL database container used by WordPress
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql

# Named volumes that persist the application and database data
volumes:
  wordpress:
  db:
To compose an App Stack, follow these steps:
- Click on App Stacks in the left navigator
- Click on the Actions menu (three dots)
- Click on Compose App Stack
- Enter a name for the App Stack
- Select the node you want to create the App Stack on
- Enter/Paste your yaml definition code
- Click on the Create button and wait 1-3 minutes and your newly created App Stack will be listed.
4.3 Clone App Stacks
Cloning an App Stack is easy, just follow these steps:
- Click on App Stacks in the left navigator
- Select the App Stack that you want to clone
- Click on the Actions menu (three dots)
- Click on Clone App Stack
- Enter a name for the new App Stack clone
- Click on the Clone button and wait 1-3 minutes and your newly cloned App Stack will be listed.
4.4 Migrate App Stacks
Migrating an App Stack is easy, just follow these steps:
- Click on App Stacks in the left navigator
- Select the App Stack that you want to migrate
- Click on the Actions menu (three dots)
- Click on Migrate App Stack
- Select the destination node
- Select one of the migration modes MOVE/CLONE
- Click on the Migrate button and wait 1-3 minutes and your migrated App Stack will be listed.
4.5 Start/Stop App Stacks
Starting, Restarting, and Stopping an App Stack is self-explanatory: just select the App Stack and issue the commands from the Actions menu (three dots).
4.6 Limit App Stacks
By default, App Stacks have no limitation on the resources available on the node. They scale their resource usage automatically to the maximum possible and release it again when it is no longer required. For example, an App Stack uses 1 CPU core at its base and scales up to 4 CPU cores automatically when a higher workload is required at peak time; after the peak, the App Stack scales back down to 1 CPU core. However, the default auto-scale option can sometimes interrupt other processes if the node is overprovisioned. You can control the automatic scaling by setting limits on resources.
To limit the resources of an App Stack, follow these steps:
- Click on App Stacks in the left navigator
- Select the App Stack that you want to limit
- Click on the Actions menu (three dots)
- Click on Limit App Stack
- Select your limitations in the Limit Dialog
- Click on the Save button to apply the limits to the App Stack.
4.7 Snapshot Manager
A snapshot preserves the system and data of an App Stack at a specific point in time. The Snapshot Manager is used to take snapshots of an App Stack, restore an App Stack from a snapshot, and delete snapshots that are not of value anymore.
Follow these steps to open the Snapshot Manager for an App Stack:
- Click on App Stacks in the left navigator
- Select the App Stack that you want to snapshot
- Click on the Actions menu (three dots)
- Click on Snapshot Manager
- In the Snapshot Manager Dialog:
- To take a snapshot:
- Enter a name for the snapshot
- Click on the Take button
- To restore from a snapshot:
- Select the snapshot you want to restore
- Click on the Restore button
- To delete a snapshot:
- Select the snapshot you want to delete
- Click on the Delete button
4.8 Backup Manager
A backup preserves the system, data, and container configuration of an App Stack at a specific point in time. The difference from a snapshot is that a backup cannot be taken while the App Stack is in a running state, and that it encapsulates the entire system, data, and configuration. The Backup Manager is used to take backups of an App Stack, restore an App Stack from a backup file, and delete backup files that are not of value anymore.
Follow these steps to open the Backup Manager for an App Stack:
- Click on App Stacks in the left navigator
- Select the App Stack that you want to backup
- Stop your App Stack if it is in a running state
- Click on the Actions menu (three dots)
- Click on Backup Manager
- In the Backup Manager Dialog:
- To take a backup:
- Enter a name for the backup
- Click on the Take button
- To restore from a backup:
- Select the backup you want to restore
- Click on the Restore button
- To delete a backup file:
- Select the backup you want to delete
- Click on the Delete button
4.9 Open Console
You can enter the console of an App Stack by starting a Console Session. Use the following steps to start a Console Session:
- Click on App Stacks in the left navigator
- Select the App Stack for which you want to start a console session
- Click on the Actions menu (three dots)
- Click on Open Console
5.1 Create Databases
Creating a Database is easy, just follow these steps:
- Click on Databases in the left navigator
- Click on the Actions menu (three dots)
- Click on Create Database
- Enter a name for the Database
- Enter an admin user password for the database
- Select the node you want to create the Database on
- Select your Database solution
- Click on the Create button and wait 1-3 minutes and your newly created Database will be listed.
5.2 Clone Databases
Cloning a Database is easy, just follow these steps:
- Click on Databases in the left navigator
- Select the Database that you want to clone
- Click on the Actions menu (three dots)
- Click on Clone Database
- Enter a name for the new Database clone
- Click on the Clone button and wait 1-3 minutes and your newly cloned Database will be listed.
5.3 Migrate Databases
Migrating a Database is easy, just follow these steps:
- Click on Databases in the left navigator
- Select the Database that you want to migrate
- Click on the Actions menu (three dots)
- Click on Migrate Database
- Select the destination node
- Select one of the migration modes MOVE/CLONE
- Click on the Migrate button and wait 1-3 minutes and your migrated Database will be listed.
5.4 Start/Stop Databases
Starting, Restarting, and Stopping a Database is self-explanatory: just select the Database and issue the commands from the Actions menu (three dots).
5.5 Limit Databases
By default, Databases have no limitation on the resources available on the node. They scale their resource usage automatically to the maximum possible and release it again when it is no longer required. For example, a Database uses 1 CPU core at its base and scales up to 4 CPU cores automatically when a higher workload is required at peak time; after the peak, the Database scales back down to 1 CPU core. However, the default auto-scale option can sometimes interrupt other processes if the node is overprovisioned. You can control the automatic scaling by setting limits on resources.
To limit the resources of a Database, follow these steps:
- Click on Databases in the left navigator
- Select the Database that you want to limit
- Click on the Actions menu (three dots)
- Click on Limit Database
- Select your limitations in the Limit Dialog
- Click on the Save button to apply the limits to the Database.
5.6 Snapshot Manager
A snapshot preserves the system and data of a Database at a specific point in time. The Snapshot Manager is used to take snapshots of a Database, restore a Database from a snapshot, and delete snapshots that are not of value anymore.
Follow these steps to open the Snapshot Manager for a Database:
- Click on Databases in the left navigator
- Select the Database that you want to snapshot
- Click on the Actions menu (three dots)
- Click on Snapshot Manager
- In the Snapshot Manager Dialog:
- To take a snapshot:
- Enter a name for the snapshot
- Click on the Take button
- To restore from a snapshot:
- Select the snapshot you want to restore
- Click on the Restore button
- To delete a snapshot:
- Select the snapshot you want to delete
- Click on the Delete button
5.7 Backup Manager
A backup preserves the system, data, and container configuration of a Database at a specific point in time. The difference from a snapshot is that a backup cannot be taken while the Database is in a running state, and that it encapsulates the entire system, data, and configuration. The Backup Manager is used to take backups of a Database, restore a Database from a backup file, and delete backup files that are not of value anymore.
Follow these steps to open the Backup Manager for a Database:
- Click on Databases in the left navigator
- Select the Database that you want to backup
- Stop your Database if it is in a running state
- Click on the Actions menu (three dots)
- Click on Backup Manager
- In the Backup Manager Dialog:
- To take a backup:
- Enter a name for the backup
- Click on the Take button
- To restore from a backup:
- Select the backup you want to restore
- Click on the Restore button
- To delete a backup file:
- Select the backup you want to delete
- Click on the Delete button
5.8 Open Console
You can enter the console of a Database by starting a Console Session. Use the following steps to start a Console Session:
- Click on Databases in the left navigator
- Select the Database for which you want to start a console session
- Click on the Actions menu (three dots)
- Click on Open Console
6.1 Create Containers
Creating a Container is easy, just follow these steps:
- Click on Containers in the left navigator
- Click on the Actions menu (three dots)
- Click on Create Container
- Enter a name for the Container
- Select the node you want to create the Container on
- Select your container operating system
- Click on the Create button and wait 1-3 minutes and your newly created Container will be listed.
6.2 Clone Containers
Cloning a Container is easy, just follow these steps:
- Click on Containers in the left navigator
- Select the Container that you want to clone
- Click on the Actions menu (three dots)
- Click on Clone Container
- Enter a name for the new Container clone
- Click on the Clone button and wait 1-3 minutes and your newly cloned Container will be listed.
6.3 Migrate Containers
Migrating a Container is easy, just follow these steps:
- Click on Containers in the left navigator
- Select the Container that you want to migrate
- Click on the Actions menu (three dots)
- Click on Migrate Container
- Select the destination node
- Select one of the migration modes MOVE/CLONE
- Click on the Migrate button and wait 1-3 minutes and your migrated Container will be listed.
6.4 Start/Stop Containers
Starting, Restarting, and Stopping a Container is self-explanatory: just select the Container and issue the commands from the Actions menu (three dots).
6.5 Limit Containers
By default, Containers have no limitation on the resources available on the node. They scale their resource usage automatically to the maximum possible and release it again when it is no longer required. For example, a Container uses 1 CPU core at its base and scales up to 4 CPU cores automatically when a higher workload is required at peak time; after the peak, the Container scales back down to 1 CPU core. However, the default auto-scale option can sometimes interrupt other processes if the node is overprovisioned. You can control the automatic scaling by setting limits on resources.
To limit the resources of a Container, follow these steps:
- Click on Containers in the left navigator
- Select the Container that you want to limit
- Click on the Actions menu (three dots)
- Click on Limit Container
- Select your limitations in the Limit Dialog
- Click on the Save button to apply the limits to the Container.
6.6 Snapshot Manager
A snapshot preserves the system and data of a Container at a specific point in time. The Snapshot Manager is used to take snapshots of a Container, restore a Container from a snapshot, and delete snapshots that are not of value anymore.
Follow these steps to open the Snapshot Manager for a Container:
- Click on Containers in the left navigator
- Select the Container that you want to snapshot
- Click on the Actions menu (three dots)
- Click on Snapshot Manager
- In the Snapshot Manager Dialog:
- To take a snapshot:
- Enter a name for the snapshot
- Click on the Take button
- To restore from a snapshot:
- Select the snapshot you want to restore
- Click on the Restore button
- To delete a snapshot:
- Select the snapshot you want to delete
- Click on the Delete button
6.7 Backup Manager
A backup preserves the system, data, and container configuration of a Container at a specific point in time. The difference from a snapshot is that a backup cannot be taken while the Container is in a running state, and that it encapsulates the entire system, data, and configuration. The Backup Manager is used to take backups of a Container, restore a Container from a backup file, and delete backup files that are not of value anymore.
Follow these steps to open the Backup Manager for a Container:
- Click on Containers in the left navigator
- Select the Container that you want to backup
- Stop your Container if it is in a running state
- Click on the Actions menu (three dots)
- Click on Backup Manager
- In the Backup Manager Dialog:
- To take a backup:
- Enter a name for the backup
- Click on the Take button
- To restore from a backup:
- Select the backup you want to restore
- Click on the Restore button
- To delete a backup file:
- Select the backup you want to delete
- Click on the Delete button
6.8 Open Console
You can enter the console of a Container by starting a Console Session. Use the following steps to start a Console Session:
- Click on Containers in the left navigator
- Select the Container for which you want to start a console session
- Click on the Actions menu (three dots)
- Click on Open Console
7.1 Create Virtual Machines
Creating a Virtual Machine is easy, just follow these steps:
- Click on Virtual Machines in the left navigator
- Click on the Actions menu (three dots)
- Click on Create Virtual Machine
- Enter a name for the Virtual Machine
- Select the node you want to deploy the Virtual Machine on
- Select the socket, core, and thread count for the CPU
- Slide the RAM selector to the amount you require
- Slide the Disk selector to the amount you require
- Click on the arrow in the upper right corner of the dialog to go to the next page
- Mount a CD-ROM with the operating system you want to install
- Click on the Create button and wait 1-3 minutes and your newly created Virtual Machine will be listed.
- Open Console to finish your installation
- When the installation is done, shut down the Virtual Machine
- In the Actions menu select Edit Virtual Machine
- Remove the CD-ROM and click on the Save button
- Start your newly installed Virtual Machine
7.2 Clone Virtual Machines
Clones of a Virtual Machine can be full or linked, depending on the amount of data copied from the source to the destination machine. A full clone is an independent copy of a Virtual Machine that shares nothing with the parent Virtual Machine after the cloning operation. Ongoing operation of a full clone is separate from the parent Virtual Machine. Full clones take longer to create than linked clones; creating a full clone can take several hours or even days if the files involved are large.
A linked clone is a copy of a Virtual Machine that shares virtual disks with the parent Virtual Machine in an ongoing manner. A linked clone is a fast way to create and run a new Virtual Machine. You can create a linked clone from the current state of a powered-off Virtual Machine. This practice conserves disk space and lets multiple Virtual Machines use the same software installation. All files available on the source machine at the moment of the clone remain available to the linked clone. Ongoing changes to the virtual disk of the parent do not affect the linked clone, and changes to the disk of the linked clone do not affect the source machine.
Cloning a Virtual Machine is easy, just follow these steps:
- Click on Virtual Machines in the left navigator
- Select the Virtual Machine that you want to clone
- Click on the Actions menu (three dots)
- Click on Clone Virtual Machine
- Enter a name for the new Virtual Machine clone
- Optionally select Linked Clone
- Click on the Clone button and wait 1-3 minutes and your newly cloned Virtual Machine will be listed.
7.3 Migrate Virtual Machines
Migrating a Virtual Machine is easy, just follow these steps:
- Click on Virtual Machines in the left navigator
- Select the Virtual Machine that you want to migrate
- Click on the Actions menu (three dots)
- Click on Migrate Virtual Machine
- Select the destination node
- Select one of the migration modes MOVE/CLONE
- Click on the Migrate button and wait 1-3 minutes and your migrated Virtual Machine will be listed.
7.4 Start/Stop Virtual Machines
Starting, Restarting, and Stopping a Virtual Machine is self-explanatory: just select the Virtual Machine and issue the commands from the Actions menu (three dots).
7.5 Snapshot Manager
A snapshot preserves the system, data and state of a Virtual Machine at a specific point in time. The state includes the Virtual Machine's power state (for example, RUNNING, STOPPED). The Snapshot Manager is used to take snapshots of a Virtual Machine, restore a Virtual Machine from a snapshot, and delete snapshots that are not of value anymore.
Follow these steps to open the Snapshot Manager for a Virtual Machine:
- Click on Virtual Machines in the left navigator
- Select the Virtual Machine that you want to snapshot
- Click on the Actions menu (three dots)
- Click on Snapshot Manager
- In the Snapshot Manager Dialog:
- To take a snapshot:
- Enter a name for the snapshot
- Click on the Take button
- To restore from a snapshot:
- Select the snapshot you want to restore
- Click on the Restore button
- To delete a snapshot:
- Select the snapshot you want to delete
- Click on the Delete button
7.6 Backup Manager
A backup preserves the system, data, and Virtual Machine configuration of a Virtual Machine at a specific point in time. The difference from a snapshot is that a backup cannot be taken while the Virtual Machine is in a running state, and that it encapsulates the entire system, data, and configuration. The Backup Manager is used to take backups of a Virtual Machine, restore a Virtual Machine from a backup file, and delete backup files that are not of value anymore.
Follow these steps to open the Backup Manager for a Virtual Machine:
- Click on Virtual Machines in the left navigator
- Select the Virtual Machine that you want to backup
- Stop your Virtual Machine if it is in a running state
- Click on the Actions menu (three dots)
- Click on Backup Manager
- In the Backup Manager Dialog:
- To take a backup:
- Enter a name for the backup
- Click on the Take button
- To restore from a backup:
- Select the backup you want to restore
- Click on the Restore button
- To delete a backup file:
- Select the backup you want to delete
- Click on the Delete button
7.7 Open Console
You can enter the console of a Virtual Machine by starting a Console Session. Use the following steps to start a Console Session:
- Click on Virtual Machines in the left navigator
- Select the Virtual Machine for which you want to start a console session
- Click on the Actions menu (three dots)
- Click on Open Console
8.1 Automatic Pools
Automatic Pools are Connect Pools that set up the instance for a user automatically at first login. For example, a master Windows image is created, and all applications are installed and/or updated. Users granted access to the Automatic Pool's Inventory are then eligible to connect and start a session. At the first login of a user, the Automatic Pool will create a linked clone of the master image and assign it to the user. If the Auto-Login option is enabled, the Single-Sign-On service will additionally authenticate the user automatically on the new Windows instance.
8.2 Manual Pools
Manual Pools are Connect Pools that are static. The operator has to assign an existing virtual machine to a user to enable access in the Manual Pool's Inventory. For example, a master Windows image is created, and all applications are installed and/or updated. The operator must then manually create as many Full/Linked clones as needed and assign each cloned virtual machine to a user in the Manual Pool's Inventory. The user can start a connect session after the manual assignment has been done. If the Auto-Login option is enabled, the Single-Sign-On service will additionally authenticate the user automatically on the new Windows instance.
8.3 Terminal Pools
Terminal Pools are Connect Pools that are multiuser and static. The operator must assign an existing virtual machine to multiple users to enable the access in the Terminal Pool's Inventory. For example, a Linux Server image is created, and all applications are installed and/or updated. The operator then assigns multiple users to that Linux Server image in the Terminal Pool's Inventory. The user can start a connect session after the manual assignment has been done.
8.4 Create Pools
The Connect Pool Dialog is used to create client connect pools. Connect Pools are used to allow users to connect securely to a virtual machine or container. Connect Pools require the gateway port 444 to establish an SSL-encrypted SSH tunnel between the client and the virtual machine or container.
"IMPORTANT"
Please make sure that the client host machines have a firewall exception configured for ports TCP/UDP 443 and 444 for inbound/outbound traffic. This firewall configuration is a requirement for both the Browser Client and the vmmax Client software!
The account management of a Connect Pool can be either Cloud Managed or Domain Managed. The following rules apply:
- Cloud Controller
Access and System User Accounts are managed in the vmmax Control Center.
- Domain Controller
Access User Accounts are managed in the vmmax Control Center, System User Accounts are managed in the Domain Controller, and Windows instances are joined to the domain automatically in an Automatic Pool. Please prepare a KeyPass with the Domain Admin information beforehand.
Creating a Connect Pool is easy, just follow these steps:
- Click on Connect Pools in the left navigator
- Click on the Actions menu (three dots)
- Click on Create Connect Pool
- Enter a name for the Connect Pool
- Select the pool type
- Select your pool account management
- Select your primary node
- Select fail-safe node
- Select your pool status RUNNING/MAINTENANCE
- Click on the arrow in the upper right corner to view the options page
- Click on the Save button and your newly created Connect Pool will be listed.
"FEATURE EXPLAINATION" - Connect Pool Options
- Auto-Login if enabled, the Cloud Account Password and the Instance Account Password will be synchronized for a single sign on experience. If not enabled, the authentication manager will always prompt for an instance account sign in.
- Drive Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the local drives of the host client machine will be passed through to the virtual machine.
- Printer Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the local printer of the host client machine will be passed through to the virtual machine.
- Clipboard Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the host client machine and the virtual machine can exchange copy paste operations.
- USB Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the usb ports of the host client machine will be passed through to the virtual machine.
- Smartcard Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the smartcard ports of the host client machine will be passed through to the virtual machine.
- Serial Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the serial ports of the host client machine will be passed through to the virtual machine.
- DirectX Pass is only valid for Windows virtual machine sessions with the vmmax client software. If enabled, the graphics workload will be shifted from the cloud node to the virtual machine's CPU/GPU.
- Audio Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the audio will be decoded on the host client machine instead of the virtual machine.
- Video Pass is only valid for virtual machine sessions with the vmmax client software. If enabled, the video will be decoded on the host client machine instead of the virtual machine.
8.5 KeyPass
The KeyPass Dialog is used to store credentials in a vault. KeyPass is required in Connect Pools to authenticate Admin Accounts during the configuration of a clone operation.
Creating a KeyPass is easy, just follow these steps:
- Click on KeyPass in the left navigator
- Click on the Actions menu (three dots)
- Click on Create KeyPass
- Enter a name for the KeyPass
- Enter the username
- Enter the password
- Confirm the password
- Click on the Save button and your newly created KeyPass will be listed.
8.6 Setup Inventory
The Setup Inventory Dialog is used to add/remove access permissions from the Connect Pool. Each pool type requires a different set of Instance-to-User or User-to-Instance combinations that link a user to an instance of a Virtual Machine or Container.
To open the Setup Inventory Dialog, just follow these steps:
- Click on Connect Pools in the left navigator
- Select the pool you want to modify
- Click on the Actions menu (three dots)
- Click on Setup Inventory
- Add/Remove your entitlements
- Click on the Save button to apply the changes
8.7 Connect Client
Users can access the Connect Pool either through their favorite Internet Browser or the vmmax Connect Client for Windows software.
Connecting with an Internet Browser
Connecting with vmmax Connect Client
9.1 Create Users
Creating a User is easy, just follow these steps:
- Click on Users in the left navigator
- Click on the Actions menu (three dots)
- Click on Create User
- Enter the user email address (Access Account)
- Enter the system username (System Account)
- Enter the full name of the user
- Enter a password for the user
- Confirm the password for the user
- Select the access group of the user
- Operator - can manage App Stacks, Databases, Virtual Machines, Containers, and Connect Pools
- User - can access Connect Pools
- Optionally, but highly recommended, enable Two Factor Authentication
- Click on the Create button and your newly created User will be listed.
9.2 Access Groups
The vmmax Cloud Platform utilizes a very simple grouping of access permissions, which can be explained as follows:
- Administrator
admin@system is the super user in the Administrator group and is created during the installation/deployment of the vmmax Control Center. There is only one Administrator account, which has full access to every module in the vmmax Control Center.
- Operator
is the Operator/Developer role, which can manage only App Stacks, Databases, Containers, Virtual Machines, and Connect Pools
- User
is the end-user role, which can only connect to and access Connect Pools
9.3 2F Authentication
2F Authentication is an extra layer of security used to make sure that people trying to gain access to a vmmax account are who they say they are. First, a user enters their username and password. Then, instead of immediately gaining access, they are required to provide a PIN code from the Authenticator App on their smartphone. It is very easy to set up and we highly recommend using 2F Authentication. vmmax 2F Authentication can be registered with your favorite Authenticator App, such as the Microsoft Authenticator App, our recommendation the 2FAS App, or any other Authenticator App that supports TOTP (time-based one-time password).
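To illustrate how TOTP works under the hood, the short Python sketch below generates and verifies a time-based code. This is a generic illustration of the open TOTP standard that such apps implement, not the vmmax implementation; the pyotp library (installable with pip install pyotp) and the generated secret are assumptions made for this example only.

import pyotp

secret = pyotp.random_base32()            # shared secret of the kind a 2FA QR code encodes (example value)
totp = pyotp.TOTP(secret)                 # time-based one-time password generator

pin = totp.now()                          # the 6-digit pin an Authenticator App would show right now
print("Current pin:", pin)
print("Pin accepted:", totp.verify(pin))  # how a server-side check of a submitted pin works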
To enable 2F Authentication, follow these steps:
Administrator Steps
- In the Users module, select the user you want to edit
- Click on the Actions menu (three dots)
- Click on Edit User
- Enable Two Factor Authentication
- Click on the Save button to apply the changes
- Ask the user to sign out and back in again
User Steps
- In the login screen, enter your email address and password
- Click on the Sign In button
- Open your favorite Authenticator App on your smartphone
- Scan the QR-Code in the login screen (you have to do this step only once)
- If you can see your vmmax Cloud account in the Authenticator App, click on the Continue button
- Enter the pin code presented in the Authenticator App
- Click on the Sign In button
10.1 Settings
The Settings Dialog is only available in the vmmax Connect Client for Windows and can be used to configure the following client settings:
- Performance/Quality Mode
suggests a model for the AI-Engine to use for optimizations:
- Performance - quality is completely ignored and only performance optimizations are calculated (recommended for WAN connections)
- Fast - quality is partly ignored and mostly performance optimizations are calculated with additional image quality calculations (recommended for WAN connections)
- Auto Adjust - (recommended) quality and performance are fine grain tuned and calculated for best user experience (best for LAN/WAN connections)
- Quality - more image quality calculations are performed to increase picture quality (recommended for LAN connections)
- High Quality - performance calculations are ignored and image quality calculations are performed for best picture quality (recommended for LAN connections)
- Color Map - defines the maximum Color Map Bits
- Allow H.264 Encoding - uses the H.264 Encoder to resolve image quality
- Enable Local GPU - enables the local GPU to assist, if available
- Auto-Adjust to Bandwidth - allows the AI-Engine to adjust the image quality according to the available bandwidth
- Enable Compression - enables the compressed image transfer protocol
- Enable Persistent Cache - creates a cache volume on the local host machine
- Proxy Server - enter your https://proxyserver:port address here
10.2 Launcher
Both vmmax Connect Client editions have the LAUNCH card for each instance that was defined in the Connect Pool for the user.
Launching a Desktop in the Internet Browser
Launching a Desktop with vmmax Connect Client for Windows
11.1 Interactive
The vmmax CLI Client has a set of commands that can be executed in the console app. To interact with a cloud node, follow these steps:
- Start the vmmax CLI console application
- Enter the following command to print the help information
help
- First you need to connect to a cloud node. Enter the following command followed by the Node Token of the cloud node you want to connect to
connect NODETOKEN
- After you see the OK message, you are connected and can start executing other commands
- To disconnect and exit, enter the following command
exit
11.2 Scripts
You can automate tasks with cron or a scheduler software by entering commands into a text file and running it as a batch.
Below is an example script:
connect NODETOKEN
vm take snapshot masterMachine snap01
vm clone full masterMachine clonedMachine
vm start clonedMachine
exit
What happens in the above script?
- We connect to a cloud node
- We take a snapshot of the vm masterMachine and name the snapshot snap01
- We clone the vm masterMachine and name the new machine clonedMachine
- We start the newly cloned machine named clonedMachine
- We disconnect and exit
To execute a script with the vmmax CLI Client, follow these steps:
- Create a script file and save it on your disk
- Now open your system console/terminal and change your directory to the location of the vmmaxCLI executable
- Enter the following command to execute the script
vmmaxCLI /path/to/your/script.file
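For example, assuming the script is saved to disk and cron is available on the machine that hosts the vmmaxCLI executable, a crontab entry like the following (the paths are placeholders, as above) would run the script every night at 02:00:
0 2 * * * /path/to/vmmaxCLI /path/to/your/script.file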
12.1 Load Balancer
What makes a vmmax Cloud Node very secure and private is the fact that each vmmax Cloud Node has its own Security Gateway with a built-in Load Balancer. First, let's examine the features of the vmmax Security Gateway's built-in Load Balancer:
- Least Busy (default)
keeps track of connections and diverts a new connection to the least busy destination unless the destinations are weighted. The Least Busy algorithm is the default for the Load Balancer.
- Weighted Balancing
keeps track of each destination's specified limit (weight) and diverts a new connection to the next weighted or least busy destination.
- HTTP/HTTPS
automatically adjusts the protocol and certificate chain (HTTPS -> HTTP, HTTPS -> HTTPS, HTTP -> HTTPS); the connection always stays secure and encrypted.
- Session Management
automatically caches and diverts user sessions and cookie chains to the correct destination. No session is lost.
- Performance Booster
automatic in-memory caching for repeat connections that are identical.
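The exact scheduling algorithm inside the vmmax Security Gateway is not published in this guide, but the weighted behavior can be illustrated with a minimal, generic weighted round-robin sketch in Python. The destination addresses are the sample values from the weighted example in section 3.4; everything else is illustrative only.

# Generic weighted round-robin illustration (not the vmmax gateway source).
# It reproduces the 3:1:1 distribution described for
# "1.1.1.212:443 weight=3;1.1.1.213:443;externalservice.mydomain.com:443".
from itertools import cycle

destinations = [
    ("1.1.1.212:443", 3),                      # weight=3
    ("1.1.1.213:443", 1),                      # default weight
    ("externalservice.mydomain.com:443", 1),   # default weight
]

# Expand each destination according to its weight and rotate through the list.
schedule = cycle([addr for addr, weight in destinations for _ in range(weight)])

for request_number in range(1, 6):             # 5 requests: 3 + 1 + 1
    print(request_number, "->", next(schedule))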
There are three balancing models that can be applied here.
- Node to Node Model
The Node to Node Model is very effective for workload balancing and redundancy of services.
        Node 1
          |
   ----------------
   |              |
 Node 2        Node 3
- Node to Apps Model
The Node to Apps Model is very effective for workload balancing, automatic scaling of services, and failover.
         Node
          |
   ----------------
   |              |
 App 1          App 2
- Node to Node to Apps Model
The Node to Node to Apps Model is the mix of the first two models and is widely used to implement a full-scale redundant environment.
        Node 1
          |
   ----------------
   |              |
 Node 2        Node 3
   |
   ----------------
   |              |
 App 1          App 2
Example Implementation
In the example below, we will demonstrate a real-life Load Balancer setup that is used by us and our customers.
Our objectives are:
- load balance and fail-safe the vmmax Control Center
- load balance and fail-safe a website
Inventory
Primary Node - yourdomain.com | This is the main node that is in the DMZ and faces the Internet
VM - vmmax Control Center (1.1.2.2) | This is the vmmax Control Center on the Primary Node
Replica Node 1 - node1.yourdomain.com | This is the replica node that is in the LAN
VM - vmmax Control Center (1.1.2.2) | This is the vmmax Control Center on Internal Node 1
VM - Web Server 1 (1.1.1.10) | This is Web Server 1 running on Internal Node 1
VM - Web Server 2 (1.1.1.20) | This is Web Server 2 running on Internal Node 1
Replica Node 2 - node2.yourdomain.com | This is the replica node that is in the LAN
VM - vmmax Control Center (1.1.2.2) | This is the vmmax Control Center on Internal Node 2
VM - Web Server 1 (1.1.1.10) | This is Web Server 1 running on Internal Node 2
VM - Web Server 2 (1.1.1.20) | This is Web Server 2 running on Internal Node 2
vmmax Security Gateway Rules
Step 1
Our first objective is to load balance and fail-safe vmmax Control Center.
The following rule can be added:
- Open the Gateway Manager of the Primary Node
- Click on the NEW RULE button
- Select Domain Name Services
- Enter cc.yourdomain.com as the domain name
- Enter 1.1.1.2:80;node1.yourdomain.com:80;node2.yourdomain.com:80 in the destination field
- Enter vmmax CC in the comment field
- Click on the Save button
Step 2
Our second objective is to load balance and fail-safe a website.
The following rule can be added:
- Open the Gateway Manager of the Primary Node
- Click on the NEW RULE button
- Select Domain Name Services
- Enter yourdomain.com as the domain name
- Enter web1.yourdomain.com:8080;web2.yourdomain.com:8080 in the destination field
- Enter Web Server in the comment field
- Click on the Save button
At this point we have set up load balance and fail-safe rules for the vmmax Control Center and a website. But we still need to load balance Web Server 1 and 2 on Replica Nodes 1 and 2.
Step 3
On Replica Node 1:
- Open the Gateway Manager of Replica Node 1
- Click on the NEW RULE button
- Select Domain Name Service
- Enter web1.yourdomain.com as the domain name
- Enter 1.1.1.10:8080;1.1.1.20:8080 in the destination field
- Enter Web Server 1 in the comment field
- Click on the Save button
Step 4
And finally, on Replica Node 2:
- Open the Gateway Manager of Replica Node 2
- Click on the NEW RULE button
- Select Domain Name Service
- Enter web2.yourdomain.com as the domain name
- Enter 1.1.1.10:8080;1.1.1.20:8080 in the destination field
- Enter Web Server 2 in the comment field
- Click on the Save button
"IMPORTANT"
Your external and internal domain names must resolve before you can set up the above rules. Please also consider using Round-Robin A-Records/DNS entries for your domain names.
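For example, a round-robin DNS setup that spreads connections for yourdomain.com across two nodes could look like this in a BIND-style zone file (the IP addresses below are documentation placeholders, not values from this guide):
yourdomain.com.    300    IN    A    203.0.113.10
yourdomain.com.    300    IN    A    203.0.113.11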
12.2 High Availability
Running server operations using clusters of either physical or virtual computers is all about improving both reliability and performance over and above what you could expect from a single, high-powered server. You add reliability by avoiding hanging your entire infrastructure on a single point of failure (i.e., a single server). And you can increase performance through the ability to very quickly add computing power and capacity by scaling up and out.
This might happen through intelligently spreading your workloads among diverse geographic and demand environments (load balancing), providing backup servers that can be quickly brought into service in the event a working node fails (failover), optimizing the way your data tier is deployed, or allowing for fault tolerance through loosely coupled architectures.
We’ll get to all that. First, though, here are some basic definitions:
Node: A single machine (either physical or virtual) running server operations independently on its own operating system. Since any single node can fail, meeting availability goals requires that multiple nodes operate as part of a cluster.
Cluster: Two or more server nodes running in coordination with each other to complete individual tasks as part of a larger service, where mutual awareness allows one or more nodes to compensate for the loss of another.
Server failure: The inability of a server node to respond adequately to client requests. This could be due to a complete crash, connectivity problems, or because it has been overwhelmed by high demand.
Failover: The way a cluster tries to accommodate the needs of clients orphaned by the failure of a single server node by launching or redirecting other nodes to fill a service gap.
Failback: The restoration of responsibilities to a server node as it recovers from a failure.
Replication: The creation of copies of critical data stores to permit reliable synchronous access from multiple server nodes or clients and to ensure they will survive disasters. Replication is also used to enable reliable load balancing.
Redundancy: The provisioning of multiple identical physical or virtual server nodes of which any one can adopt the orphaned clients of another one that fails.
Split brain: An error state in which network communication between nodes or shared storage has somehow broken down and multiple individual nodes, each believing it’s the only node still active, continue to access and update a common data source. While this doesn’t impact shared-nothing designs, it can lead to client errors and data corruption within shared clusters.
Fencing: To prevent split brain, the stonithd daemon can be configured to automatically shut down a malfunctioning node or to impose a virtual fence between it and the data resources of the rest of a cluster. As long as there is a chance that the node could still be active, but is not properly coordinating with the rest of the cluster, it will remain behind the fence. Stonith stands for “Shoot the other node in the head”. Really.
Quorum: You can configure fencing (or forced shutdown) to be imposed on nodes that have fallen out of contact with each other or with some shared resource. Quorum is often defined as more than half of all the nodes on the total cluster. Using such defined configurations, you avoid having two subclusters of nodes, each believing the other to be malfunctioning, attempting to knock the other one out.
Disaster Recovery: Your infrastructure can hardly be considered highly available if you’ve got no automated backup system in place along with an integrated and tested disaster recovery plan. Your plan will need to account for the redeployment of each of the servers in your cluster.
Active/Passive Cluster
The idea behind service failover is that the sudden loss of any one node in a service cluster would quickly be made up by another node taking its place. For this to work, the IP address is automatically moved to the standby node in the event of a failover. Alternatively, network routing tools like load balancers can be used to redirect traffic away from failed nodes. The precise way failover happens depends on the way you have configured your nodes.
Only one node will initially be configured to serve clients, and will continue to do so alone until it somehow fails. The responsibility for existing and new clients will then shift (i.e., “failover”) to the passive — or backup — node that until now has been kept passively in reserve. Applying the model to multiple servers or server room components (like power supplies), n+1 redundancy provides just enough resources for the current demand plus one more unit to cover for a failure.
Active/Active Cluster
A cluster using an active/active design will have two or more identically configured nodes independently serving clients.
Should one node fail, its clients will automatically connect with the second node and, as far as resources permit, receive full resource access.
Once the first node recovers or is replaced, clients will once again be split between both server nodes.
The primary advantage of running active/active clusters lies in the ability to efficiently balance a workload between nodes and even networks. The load balancer — which directs all requests from clients to available servers — is configured to monitor node and network activity and use some predetermined algorithm to route traffic to those nodes best able to handle it. Routing policies might follow a round-robin pattern, where client requests are simply alternated between available nodes, or by a preset weight where one node is favored over another by some ratio.
Having a passive node acting as a stand-by replacement for its partner in an active/passive cluster configuration provides significant built-in redundancy. If your operation absolutely requires uninterrupted service and seamless failover transitions, then some variation of an active/passive architecture should be your goal.
Shared-Nothing vs. Shared-Disk Clusters
One of the guiding principles of distributed computing is to avoid having your operation rely on any single point of failure. That is, every resource should be either actively replicated (redundant) or independently replaceable (failover), and there should be no single element whose failure could bring down your whole service.
Now, imagine that you’re running a few dozen nodes that all rely on a single database server for their function. Even though the failure of any number of the nodes will not affect the continued health of those nodes that remain, should the database go down, the entire cluster would become useless. Nodes in a shared-nothing cluster, however, will (usually) maintain their own databases so that — assuming they’re being properly synced and configured for ongoing transaction safety — no external failure will impact them.
Losing access to shared data will have a more significant impact on a load-balanced cluster, as each load-balanced node has a constant and critical need for simultaneous access to the data. The passive node in a simple failover system, however, might be able to survive for some time without access.
While such a setup might slow down the way the cluster responds to some requests (partly because fears of split-brain failures might require periodic fencing through stonith), the trade-off can be justified for mission-critical deployments where reliability is the primary consideration.
Availability
When designing your cluster, you'll need to have a pretty good sense of just how tolerant you can be of failure. Or, in other words, given the needs of the people or machines consuming your services, how long can a service disruption last before the mob comes pouring through your front gates with pitchforks and flaming torches? It's important to know this, because the amount of redundancy you build into your design will have an enormous impact on the downtime you will eventually face.
Obviously, the system you build for a service that can go down for a weekend without anyone noticing will be very different from an e-commerce site whose customers expect 24/7 access. At the very least, you should generally aim for an availability average of at least 99%, with some operations requiring significantly higher real-world results. 99% uptime translates to less than a total of four days of downtime out of every year.
There is a relatively simple formula you can use to build a useful estimate of Availability (A): divide the Mean Time Between Failures (MTBF) by the Mean Time Between Failures plus the Mean Time To Repair (MTTR):
A = MTBF / (MTBF + MTTR)
The closer the value of A comes to 1, the more highly available your cluster will be.
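For example (illustrative numbers only), a node that runs an average of 1,000 hours between failures (MTBF) and takes 10 hours to repair (MTTR) would score A = 1000 / (1000 + 10) ≈ 0.99, or roughly 99% availability, right at the minimum target suggested above.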
Example Implementation
Now that we know almost everything about High Availability, let's implement an Active/Active Cluster with No Single Point of Failure.
What is our objective?
We have 8 CAD Workstations with the following specs: 8 CPUs, 24 GB of RAM, and a 250 GB disk. External engineers use them to deliver CAD designs for production. The customer wants these 8 machines to be highly available, since the engineers are located globally and some of them also work on weekends. Our objective is to design an Active/Active Cluster that provides High Availability for all 8 CAD Workstations.
Inventory:
- Primary Node - cad.yourdomain.com (102.10.20.30) | The main node, located in the DMZ and facing the Internet
  - VM - vmmax Control Center - CC1 (1.1.2.2) | The vmmax Control Center on the Primary Node
- Replica Node - cad.yourdomain.com (102.10.20.40) | The replica node, located in the DMZ and facing the Internet
  - VM - vmmax Control Center - CC2 (1.1.2.2) | The vmmax Control Center on the Replica Node
- CAD Node 1 - cad1.yourdomain.local | A CAD Node located in the LAN
  - VM - GoldCADImage (powered off) | A CAD Workstation Virtual Machine
  - VM - cad01 | A CAD Workstation Virtual Machine
  - VM - cad02 | A CAD Workstation Virtual Machine
  - VM - cad03 | A CAD Workstation Virtual Machine
  - VM - cad04 | A CAD Workstation Virtual Machine
- CAD Node 2 - cad2.yourdomain.local | A CAD Node located in the LAN
  - VM - GoldCADImage (powered off) | A CAD Workstation Virtual Machine
  - VM - cad05 | A CAD Workstation Virtual Machine
  - VM - cad06 | A CAD Workstation Virtual Machine
  - VM - cad07 | A CAD Workstation Virtual Machine
  - VM - cad08 | A CAD Workstation Virtual Machine
Our Approach:
- Set up a round-robin A record for cad.yourdomain.com pointing to both 102.10.20.30 and 102.10.20.40 (see the zone-file sketch after this list)
- Set the HA Option on Cloud Node 102.10.20.30 to Primary
- Set the HA Option on Cloud Node 102.10.20.40 to Replica
- Create an Automatic Connect Pool in the Primary vmmax Control Center (CC1) with the following settings:
- Display Name = CAD
- Primary Node = cad1.yourdomain.local
- Fail-Safe Node = cad2.yourdomain.local
- Set up the following inventory in the CAD Pool:
- cad01 - user1
- cad02 - user2
- cad03 - user3
- cad04 - user4
- Create a second Automatic Connect Pool in the Primary vmmax Control Center (CC1) with the following settings:
- Display Name = CAD
- Primary Node = cad2.yourdomain.local
- Fail-Safe Node = cad1.yourdomain.local
- Set up the following inventory in the CAD Pool:
- cad05 - user5
- cad06 - user6
- cad07 - user7
- cad08 - user8
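As a sketch of the round-robin A record from the first step of this approach (BIND-style zone syntax; the 300-second TTL is an assumption, and your DNS provider's interface may differ), both public IPs are published under the same name:

cad.yourdomain.com.    300    IN    A    102.10.20.30
cad.yourdomain.com.    300    IN    A    102.10.20.40

Resolvers and clients rotate between the two addresses, so users always connect to cad.yourdomain.com and land on either the Primary or the Replica Control Center.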
Conclusion
With this setup we have:
- Ensured that users come in on the same domain, cad.yourdomain.com
- Provided a backup vmmax Control Center that is transparent to the users
- Placed 4 active users on CAD Node 1 and 4 active users on CAD Node 2 (an Active/Active Cluster)
- Ensured that users can continue to work: if a node fails, its Fail-Safe Node automatically takes over that node's 4 users
12.3 Backup Strategy
Protecting data is essential! A robust data backup strategy can, in the event of a disaster such as ransomware, flood, or power outage, help you get up and running as soon as possible. Here are the top 5 must-haves in a data backup strategy.
Onsite Backups: When a server crashes or fails, it is helpful to have data backups on hand for easy restoration. It's a cliché, but time is indeed money. Onsite backups are often faster to restore than cloud backups and almost always faster than offsite tape backups.
Offsite Backups: Onsite backups are valuable, but they cannot be counted on alone. Should something disastrous happen to the data center, it could also damage any backups you have in the building. For that reason, it is always wise to have copies of your backups offsite where they can be accessed manually or through the cloud.
Optimized Backup Schedule: Backups are not a one-and-done process. Key data in your data center must be regularly and consistently backed up according to a clear and organized schedule. Check out our blog article on backup rotation schemes for more information.
Backup Testing: Backups need to be tested, and tested regularly. In addition, IT staff must be trained on how to access and restore data backups as quickly as possible. A backup that fails, or a team that is unable to restore it quickly, undermines the company's investment in a backup solution in the first place (a restore-verification sketch follows these five points).
Organized Storage System: Applying mostly to tape-based backup solutions, the storage repository and labeling system for backups must be clear and organized. The team cannot afford to spend extra time digging through box after box of tapes looking for a specific backup from a specific date several years ago.
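As a minimal sketch of what a restore test can look like (this is generic Python, not a vmmax tool; the directory names and helper functions are assumptions), you can compare checksums of the original data against a test restore:

import hashlib
from pathlib import Path

def sha256(path):
    # Hash a file in 1 MB chunks so large files do not exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir, restore_dir):
    # Compare every file in the source tree against its restored counterpart.
    mismatches = []
    for src in Path(source_dir).rglob("*"):
        if src.is_file():
            restored = Path(restore_dir) / src.relative_to(source_dir)
            if not restored.is_file() or sha256(src) != sha256(restored):
                mismatches.append(str(src))
    return mismatches

# Example paths - adjust to your environment.
print(verify_restore("/data/cad-projects", "/mnt/restore-test/cad-projects"))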
The 3-2-1 Backup Strategy
The 3-2-1 backup strategy is well known across the industry. Despite drastic changes to the technology powering backups, and even calls for 3-1-2, 3-2-2, and 3-2-3 configurations, the 3-2-1 backup strategy provides a baseline rule by which companies can protect the data on which they rely.
The 3-2-1 backup strategy states that you should keep:
- At least THREE copies of your data;
- Backed-up data on TWO different storage types;
- At least ONE copy of the data offsite.
Speed Is The Key
Central to all of these backup must-haves is speed. Backups not only need to be reliable and accessible, but the company needs to be able to restore the data quickly. When assessing possible data backup strategies in your environment, do not lose sight of this metric.
The Backup Manager
You can take manual backups of any Container or Virtual Machine with the Backup Manager and/or automate your backups with a scheduled backup script using the vmmaxCLI Client. In addition, we recommend implementing a professional backup solution such as Veeam, Commvault, or Acronis.
12.4 Shared Stores
Shared Store is a single storage resource pool that is shared by multiple computer/server resources. It allows servers to save data and files on a shared storage system, designed to be independent of each server or computer. It is also designed to be much faster, more reliable and easier to scale. This basic concept has been utilized for years in order to save space and network bandwidth. Shared Storage technology simplifies the processes of accessing, migrating and archiving data. It is essential for achieving high-availability (HA) and to a large extent enables efficient “disaster recovery”, “continuous data protection” (CDP) and “business continuity” storage features.
Although, for security and performance, we highly recommend the compartmentalized approach where each Cloud Node has its own storage, there are situations where shared storage may be beneficial for large volumes of data, such as large Backup Volumes, Hadoop clustering, Data Science, or Big Data workloads.
The vmmax Cloud Node supports NFS, Ceph, and GlusterFS out of the box, and you can use the vmmaxCLI Client to mount the Backup Store, VM Store, and CN Store on an external high-performance storage unit such as Pure Storage, Dell EMC Isilon, or Hitachi Vantara.
It is important that the external storage unit has a minimum of 2 x 10 Gbit/s network connectivity to support the I/O load of running Virtual Machines and Containers. Furthermore, please use only storage clusters with node/disk redundancy to eliminate single points of failure.
So what is a Shared Store? A Shared Store centralizes data in one place, but it is more than just that. In today's business environment, it is imperative that data be accessible on a 24/7 basis and not be subject to hardware failures. For example, if a physical server fails and a shared storage pool is available, you can power up the failed server's VMs and workloads on a different host. In this way, the VMs/workloads continue running without any data loss, since their data was saved on the shared storage system, not on the local drives of the failed server.
Example Implementation
Our Objective is to setup 3 vmmax Cloud Nodes that utilize a shared VM/CN Store and Backup Store.
Our Approach:
- Set up 3 vmmax Cloud Nodes with the vmmax Cloud Platform Installer
- On our storage device we export the following NFS shares (see the /etc/exports sketch after this list):
- /vmmax/vms
- /vmmax/cns
- /vmmax/backup
- On each node we issue the following commands with the vmmaxCLI Client:
- To connect to the node
connect YOURNODETOKEN
- For the VM Store
vm store mount nfs 10.10.10.10:/vmmax/vms
- For the CN Store
cn store mount nfs 10.10.10.10:/vmmax/cns
- For the Backup Store
backup store mount nfs 10.10.10.10:/vmmax/backup
- Finally we exit
exit
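If the storage device at 10.10.10.10 happens to be a Linux NFS server (dedicated appliances such as those named above have their own export interfaces), the three exports from the second step could look roughly like this in /etc/exports; the 10.10.10.0/24 client subnet is an assumption and should match your node network:

/vmmax/vms      10.10.10.0/24(rw,sync,no_subtree_check)
/vmmax/cns      10.10.10.0/24(rw,sync,no_subtree_check)
/vmmax/backup   10.10.10.0/24(rw,sync,no_subtree_check)

After editing /etc/exports, re-export the shares (for example with exportfs -ra) before mounting the stores from the nodes.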
12.5 SSL Certificates
What makes vmmax Cloud stand out is that it is designed from the ground up to be private and secure. Placing a system in the DMZ, facing the Internet, can be a challenging security task. Buying, configuring, and renewing SSL Certificates, for example, can become costly and time consuming, especially if you provide many services for mobility. The simple design of our cloud node system eliminates these challenges with the built-in Gateway technology.
Let's Encrypt Services
Let's Encrypt, a nonprofit Certificate Authority serving SSL/TLS certificates to 260 million websites, provides automated SSL Certificates for free. We support this organization with donations; please consider donating too.
We have integrated Let's Encrypt services into our Gateway Manager, which automatically installs and renews SSL Certificates for all your domain names facing the public Internet.
Note: You can read more here about how to set up a domain for which a certified vmmax node automatically installs/renews SSL Certificates.
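To verify that a certificate has been installed and renewed as expected, you can inspect it from any machine with OpenSSL installed (cad.yourdomain.com is the example domain used earlier in this chapter; substitute your own public domain):

openssl s_client -connect cad.yourdomain.com:443 -servername cad.yourdomain.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates

The issuer should show Let's Encrypt, and the notAfter date should keep moving forward as the Gateway Manager renews the certificate automatically.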