Category Archives: VMware vSphere 5

Biggest Game Changer Since Server Virtualization

Link: Storage Optimization Software for the Virtual Data Center.

Atlantis ILIO USX—Biggest Game Changer Since Server Virtualization

Deploy Up to 5X More VMs on Any Storage, Lower Costs by Up to 50%

Atlantis ILIO USX™ is a significant breakthrough in virtualization technology, delivering a solution that unlocks the underutilized capacity of over $50 billion of deployed enterprise storage, similar to what VMware did for server virtualization.

Atlantis ILIO USX software gives IT the flexibility to get more out of their existing storage, even their older arrays, and to create new software-defined storage hybrid arrays, hyper-converged systems, and all-flash arrays by aggregating and pooling existing server SSDs, SAS, flash, and RAM together with shared SAN/NAS arrays.


Atlantis ILIO USX pools and optimizes server RAM, SAS, and/or Flash to create a highly scalable, hyper-converged platform using existing servers. Atlantis ILIO USX provides the flexibility to pool commodity local storage with RAM and/or flash across multiple server farms. By doing so, customers can seamlessly scale out their architecture to create a hyper-converged infrastructure and manage their applications without having to rip out their existing infrastructure.

Atlantis ILIO USX optimizes how storage is consumed by the application or VM by inserting a transparent software layer between the application and storage. The Atlantis ILIO USX software resides on the hypervisor platform as a set of virtual machines that can abstract any storage hardware into pools of virtual storage that can be combined together to form an Application Defined Storage Volume (ADS Volume). There are two types of storage pools:

  1. Capacity Pool: Any type of storage is pooled together to provide capacity for ADS Volumes. SAN, NAS, and local storage including SSD and Flash can be pooled in any combination to provide the underlying capacity required by applications and VMs.
  2. Memory Pool: High-performance server resources such as flash, RAM or even flash on DIMM are pooled and optimized to provide the performance required by the application storage volume. The memory pool can be used either as an optimization and acceleration tier or as primary storage for applications and VMs.

These Capacity and Memory pools are combined together automatically to create Application Defined Storage Volumes (ADS Volume) with policy-based controls that apply the ideal combination of storage capacity, performance and availability for the application. The ADS Volume provides enterprise-class storage functionality including high-availability, data protection, thin provisioning and cloning. Multiple ADS Volumes can be created from a single Capacity and Performance Pool, enabling true application-centric storage for the first time.
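As a rough mental model of the two-pool design described above (not Atlantis code — every class and field name below is invented for illustration), an ADS Volume can be thought of as a policy object stitched over a capacity pool and a memory pool:

```python
# Hypothetical sketch of an Application Defined Storage Volume combining a
# capacity pool and a memory (performance) pool under a per-app policy.
# All names here are illustrative, not Atlantis ILIO USX APIs.

class Pool:
    def __init__(self, name, devices):
        self.name = name
        self.devices = devices  # e.g. [("SAN", 2000), ("SSD", 500)], sizes in GB

    def total_gb(self):
        return sum(size for _, size in self.devices)


class ADSVolume:
    """Combines a capacity pool and a memory pool under a policy."""

    def __init__(self, name, capacity_pool, memory_pool, policy):
        self.name = name
        self.capacity_pool = capacity_pool
        self.memory_pool = memory_pool
        self.policy = policy  # e.g. {"ha": True, "thin": True}

    def describe(self):
        return (f"{self.name}: {self.capacity_pool.total_gb()} GB capacity, "
                f"{self.memory_pool.total_gb()} GB acceleration, "
                f"policy={self.policy}")


# Any mix of shared and local storage feeds the capacity pool; RAM and
# flash feed the memory pool. Multiple volumes could draw on the same pools.
capacity = Pool("capacity", [("SAN", 2000), ("local-SSD", 500)])
memory = Pool("memory", [("RAM", 256), ("flash-DIMM", 128)])
vol = ADSVolume("sql-vol", capacity, memory, {"ha": True, "thin": True})
print(vol.describe())
```

The point of the sketch is only the shape of the abstraction: pools aggregate heterogeneous devices, and the volume binds pools to an application policy.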

The Atlantis ILIO USX platform applies multiple storage optimization technologies to boost the performance and increase the available storage capacity provided to the application. At the same time, Atlantis ILIO USX dramatically reduces the impact of application IO traffic on storage resource utilization (disk, storage controller, network).

Use with existing Shared Storage

Use Local SAS or SATA

Use RAM as Primary Storage

Optimized All-Flash Arrays

In-Memory Architecture — Atlantis ILIO USX can run VMs completely in server RAM to deliver high-speed, low latency runtime storage without re-architecting applications

IO Processing — Atlantis ILIO USX processes IO operations in real-time at the compute layer to lower latency and reduce network traffic

Inline De-duplication — Atlantis ILIO USX performs inline de-duplication in real-time on-the-wire with microsecond latency, eliminating up to 90% of storage IO traffic

Real-Time Compression — Atlantis ILIO USX compresses the optimized blocks In-Memory with microsecond latency

IO Blender Fix — Atlantis ILIO USX coalesces small random blocks generated by the hypervisor into larger sequential blocks, greatly improving storage access and efficiency

High Availability — Atlantis ILIO USX provides integrated high-availability and data protection to prevent application downtime.

Thin Provisioning — All Atlantis ILIO USX storage volumes are automatically thin provisioned with up to 10:1 consolidation.

Fast Clone — Atlantis ILIO USX can clone full VMs in as little as 4 seconds with no network or storage traffic.
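The inline de-duplication step listed above can be illustrated with a minimal sketch: fingerprint each incoming block, and only send previously unseen blocks to the backend. This is a toy model of the general technique, not Atlantis's implementation:

```python
import hashlib

# Toy inline block-level deduplication: each incoming block is fingerprinted
# before it is written; duplicates become a reference to the stored block
# instead of a backend IO. Real products do this in memory at microsecond
# latency; this only illustrates the idea.

class DedupStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> block data (one copy per unique block)
        self.refs = []     # logical write log: fingerprints in write order
        self.ios_saved = 0

    def write(self, block: bytes):
        fp = hashlib.sha256(block).hexdigest()
        if fp in self.blocks:
            self.ios_saved += 1      # duplicate: no backend write needed
        else:
            self.blocks[fp] = block  # unique: exactly one backend write
        self.refs.append(fp)


store = DedupStore()
# Nine identical OS-image blocks plus one unique block: only 2 backend writes.
for block in [b"OS-image-block"] * 9 + [b"user-data-block"]:
    store.write(block)
print(len(store.blocks), store.ios_saved)  # prints: 2 8
```

Highly redundant workloads (many VMs booted from the same image) are exactly where this kind of offload eliminates the bulk of write traffic.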


VMware Adaptive Storage Load Balancing

Adaptive PSP is a Path Selection Policy (PSP) plug-in that VMware developed to adaptively switch load-balancing strategies based on system load.

Moving forward, VMware are working to further enhance the adaptive logic by introducing a path scoring attribute that ranks different paths based on I/O latency, bandwidth, and other factors.

The score is used to decide whether a specific path should be used under different system I/O load conditions. Further, the logic decides the percentage of I/O requests that should be dispatched to a given path, and could also combine the path score with I/O priorities by introducing priority queuing within the PSA.
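As a hedged illustration of that path-scoring idea (the weights, formula, and path names below are invented for the sketch, not VMware's actual logic), a per-path score could drive proportional I/O dispatch:

```python
# Illustrative latency/bandwidth-weighted path scoring for multipath load
# balancing: each path gets a score, and I/O is dispatched to paths in
# proportion to their scores. Weights and formula are made up.

def score_path(latency_ms, bandwidth_mbps, w_lat=0.7, w_bw=0.3):
    # Lower latency and higher bandwidth both raise the score.
    return w_lat * (1.0 / latency_ms) + w_bw * (bandwidth_mbps / 1000.0)


def dispatch_fractions(paths):
    """paths: {name: (latency_ms, bandwidth_mbps)} -> fraction of IO per path."""
    scores = {name: score_path(lat, bw) for name, (lat, bw) in paths.items()}
    total = sum(scores.values())
    return {name: round(s / total, 2) for name, s in scores.items()}


# A fast path and a degraded path: the fast one gets most of the IO.
paths = {"vmhba1:C0:T0:L0": (2.0, 800), "vmhba2:C0:T0:L0": (8.0, 400)}
print(dispatch_fractions(paths))  # prints: {'vmhba1:C0:T0:L0': 0.74, 'vmhba2:C0:T0:L0': 0.26}
```

Priority queuing, as mentioned above, would layer on top of this: high-priority requests would be steered first toward the highest-scoring paths.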

Virtual Flash vFlash Tech Preview

Article from VMware vSphere Blog


Virtual Flash (vFlash) Tech Preview

This is the third and final tech preview of storage features which were shown at VMworld 2012. Already we have looked at two tech previews – Distributed Storage and Virtual Volumes (VVOLs). This next feature is vFlash, or Virtual Flash, and this post will look at a project underway here at VMware to integrate local flash devices with vSphere. This will deliver significant performance improvements and reduce I/O latency for many workloads running in the Virtual Machine.

To date, VMware has done very little around flash. We only have the smartd introduced in vSphere 5.1 to monitor Solid State Drives (SSD) attributes and the swap to SSD feature. Those of you who have read the technical preview article on Distributed Storage will have read about SSD being used as a cache (for both read and write I/O). This post discusses an additional project called vFlash, the purpose of which is to integrate vSphere with local flash devices. What we are looking to do is to enable a new tier of storage for your Virtual Machines. For those of you unfamiliar with flash technology, I recently wrote a blog article on my personal blog which will give you a pretty decent overview.

Please note that since this is a tech preview article, there is no guarantee when this feature will appear (if ever), nor is there any guarantee that the end product will encompass any or all of the attributes discussed in the post. It is simply to give you an idea of what we are working on, and what features the final product might include. vFlash was discussed at VMworld 2012 & my colleague Duncan provides a nice overview of the session on his blog here.

vFlash Infrastructure

We want our customers to be able to select off the shelf flash devices, be they SSD or PCIe I/O Accelerator cards. Once the flash devices are plugged into vSphere hosts, we have a toolset available in vSphere to manage flash as a resource, just like you currently manage CPU and memory resources. Basically, you will be creating a flash pool, the contents of which will be carved up to provide flash resources to individual VMs using constructs that you are already familiar with, such as reservations, limits & shares.
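As a sketch of how reservations, limits, and shares might carve up such a flash pool (the allocation formula, field names, and numbers are illustrative only, not VMware's implementation):

```python
# Hypothetical shares-based carving of a host flash pool among VMs,
# mirroring how CPU/memory reservations, limits, and shares work in
# vSphere. Reservations are granted first; spare capacity is divided in
# proportion to shares, capped by each VM's limit. (Capacity freed by a
# limit clamp is not redistributed in this simple sketch.)

def allocate_flash(pool_gb, vms):
    """vms: {name: {"reservation": GB, "limit": GB, "shares": int}}"""
    alloc = {name: cfg["reservation"] for name, cfg in vms.items()}
    spare = pool_gb - sum(alloc.values())
    total_shares = sum(cfg["shares"] for cfg in vms.values())
    for name, cfg in vms.items():
        extra = spare * cfg["shares"] / total_shares
        alloc[name] = min(cfg["limit"], alloc[name] + extra)
    return alloc


vms = {
    "db":  {"reservation": 20, "limit": 100, "shares": 2000},
    "web": {"reservation": 10, "limit": 40,  "shares": 1000},
}
print(allocate_flash(120, vms))  # prints: {'db': 80.0, 'web': 40.0}
```

The familiar semantics are the point: an administrator who already reasons about CPU shares would reason about the flash pool the same way.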

vFlash is a framework. VMware will be providing our own default software plugin (vFC) for the framework, but other flash vendors can create their own plugins (vFlash Cache Modules) containing bespoke and proprietary algorithms for utilizing their specific cache/flash devices. The vision is to publish a set of APIs to support this, and share them with potential partners.

The vFlash infrastructure does not specify which Virtual Disk data is to be cached in vFlash. That decision is left up to the vFlash Cache Module. It is envisioned that the vFlash Framework will support multiple vFlash Cache Modules per ESXi host. Different Virtual Machines running on the same host can be configured to use different vFlash Cache Modules. This permits the vFlash caching algorithm used for a given Virtual Disk to be tailored to the storage I/O behavior of the application running in the Virtual Machine using that Virtual Disk.

That caching software can come in two forms – VM-aware caching & VM-transparent caching. With VM-transparent caching, the VMs will share a pool of flash resources and are not aware that they are doing so; they will simply benefit from having flash in their I/O path. With VM-aware caching, chunks of flash can be allocated on a per VM basis, and these will show up as SCSI disks in the Guest OS.

VM-aware Caching (vFlash Memory)

This is where a flash resource is presented directly to the VM. A new virtual hardware device called vFlash Memory is added to the Virtual Machine. The interesting part of this approach is that the caching algorithm needs to be controlled by the VM, and not by the caching software on the hypervisor. This may possibly entail the installation of agents in the Guest OS, in a similar fashion to how some flash vendors currently do things. The cache appears as a disk drive to the Guest OS, which can then be formatted and used appropriately by applications in the Guest OS, and thus can benefit from having access to a flash device.

VM-transparent Caching (vFlash Cache)

This is where the VM is unaware that there is a flash cache in the I/O path, but benefits from it all the same. A new virtual hardware device called vFlash Cache is added to the VM. In this case, cache software on the hypervisor is there to provide a suitable algorithm for the I/O. Options available during the configuration of VM-transparent caching will include reservation size, the choice of vFlash Cache Module, mode of operation (write-back or write-through), block size (tuned to Guest OS requirements) and what to do with the flash contents during a migration (migrate or drop). The user will have the option of migrating flash contents if the destination host is compatible. The default option is to attempt to migrate the content, but if the destination host is incompatible, then we will drop the flash contents and rebuild them on the destination host. Obviously this will have a performance impact, as the flash contents would need to ‘warm up’ on the destination host. VMs with flash cannot be migrated to a destination host with insufficient vFlash resources: even if the contents are not migrated, the flash contents will need to be rebuilt at the destination host, and we need to ensure that appropriate resources are available to do this. vCenter compatibility checks will fail the migration if the destination host does not have sufficient cache resources.

Similarly, when considering vSphere HA, flash will need to be pre-allocated or set aside to ensure that enough resources are available for virtual machines to successfully restart on remaining hosts in the event of a host failure.
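The migration decision flow described above can be sketched as follows (function and parameter names are hypothetical, chosen only to mirror the prose):

```python
# Sketch of the vMotion decision for a VM with a vFlash cache: fail fast if
# the destination lacks vFlash resources (needed even for a rebuild), then
# either migrate the cache contents or drop and rebuild them.

def migrate_vm(cache_gb, dest_free_flash_gb, dest_compatible, prefer_migrate=True):
    if dest_free_flash_gb < cache_gb:
        # vCenter compatibility checks fail the migration outright.
        return "FAIL: insufficient vFlash resources on destination"
    if prefer_migrate and dest_compatible:
        return "migrate cache contents (no warm-up needed)"
    # Incompatible destination, or user chose 'drop':
    return "drop cache, rebuild on destination (performance impact while warming)"


print(migrate_vm(16, 8,  True))   # resource check fails the migration
print(migrate_vm(16, 32, False))  # drop and rebuild, cache must warm up
print(migrate_vm(16, 32, True))   # contents migrate with the VM
```

Note how the resource check dominates: even the "drop" path needs free flash at the destination, which is exactly why the compatibility check cannot be skipped.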

Direct attached flash storage is gaining significant momentum right now, and is an ideal resource to provide low latency for latency-sensitive applications, enabling even more tier 1 applications to be virtualized. VMware wants to enable our customers to leverage flash resources through well known and well understood vSphere mechanisms.

Get notification of these blogs postings and more VMware Storage information by following me on Twitter:  @VMwareStorage

What is Diskless VDI?

Diskless Virtual Desktop Infrastructure (VDI) is the concept of using local server memory in combination with storage optimization software to store virtual desktop images instead of shared SAN/NAS or local SAS/SSD storage.

By storing virtual desktop images in the local memory of the hypervisor where the desktops execute, response times are faster than with even the most expensive local SSD drives (MLC or SLC), costs are lower when combined with Atlantis ILIO, and reliability is increased.

With existing VDI architectures, virtual desktop images are stored on either shared SAN/NAS storage or local SSD disks, which are costly, have limited IOPS for write-intensive VDI workloads, can have a limited lifespan and consume more power than memory.




  1. Software only – Atlantis ILIO Diskless VDI is a purpose built storage layer to run virtual desktops with just CPU and RAM and no other storage or SSDs. Scale-out VDI infrastructure with just servers and software.
  2. Amazing user experience – 300+ IOPS/user* – faster than a physical PC user experience, even on iPads.
  3. CAPEX below $200/user – infrastructure cost under £125 per desktop including the server hardware, RAM and Atlantis ILIO.
  4. Lower OPEX – Enable lower operating expenses by eliminating rack space, power consumption, cooling and repair costs, and daily operational tasks of maintaining disk-based storage.
  5. Automated multi-rack deployment – Automatically install and configure ILIO on hundreds of servers across dozens of racks. Creates and registers NFS data stores that are ready to use by the VDI broker to complete the provisioning process.

Cut VDI CAPEX by 50-75%

Cut VDI CAPEX by 50-75%

  • Use Less Storage—cut the amount of VDI storage required by up to 90%
  • Scale VDI Storage—support 5 to 10 times the number of desktops on the same storage with equal or better performance
  • Use Any Storage —use lower cost SAN, NAS and Local Disk (SATA, SAS, SSD)

Boost VDI Performance

  • Desktop Performance—make desktop boot, login, and application startup perform faster
  • Accelerate Application Virtualization—accelerate virtualized applications for VMware ThinApp, Microsoft App-V, and Citrix XenApp App Streaming to improve logon times for non-persistent desktops and launch times for virtualized applications

Inline Deduplication

Atlantis ILIO transparently offloads write operations before they reach the storage fabric.

  • VDI Storage IOPS Offload—up to 90% reduction in IOPS load on storage infrastructure
  • VDI Storage Capacity Reduction—up to 95% reduction in disk storage capacity requirements
  • VDI Latency—accelerates VDI performance by eliminating disk write latency

Windows NTFS Protocol Layer Processing

  • Hyper-efficient IO characterization—instantly characterizes IO based on Windows NTFS file system characteristics
  • Faster IO Processing—real-time IO processing locally from memory
  • IO Optimization—re-sequences read/write operations from small random IO to large sequential
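The re-sequencing idea above can be sketched as a toy coalescer that sorts buffered writes by offset and merges adjacent ones into larger sequential runs (a generic illustration of the technique, not Atlantis code):

```python
# Toy write coalescer: small random writes are buffered, sorted by offset,
# and adjacent runs merged, so the backend sees fewer, larger sequential
# IOs instead of many small random ones.

def coalesce(writes):
    """writes: list of (offset, length) in bytes; returns merged runs."""
    merged = []
    for off, length in sorted(writes):
        if merged and merged[-1][0] + merged[-1][1] == off:
            # This write starts exactly where the previous run ends: extend it.
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((off, length))
    return merged


# Four small random 4 KB writes collapse into two sequential IOs:
random_writes = [(8192, 4096), (0, 4096), (4096, 4096), (65536, 4096)]
print(coalesce(random_writes))  # prints: [(0, 12288), (65536, 4096)]
```

This is the inverse of the "IO blender" effect: the hypervisor interleaves many guests' IO into a random stream, and coalescing restores sequentiality before the storage sees it.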

Integrates with Leading VDI Solutions

  • Certified Interoperable—VMware View Technology Alliance Partner and Citrix Ready Partner
  • Leverages Existing Infrastructure—VDI brokers, application virtualization, profile virtualization, and image layering tools
  • No Desktop Changes—doesn’t require an agent or any change to the desktop image
  • Deployed Using Open Virtualization Format (OVF)—easily installed in 8 minutes as an .OVF virtual appliance

vSphere 5.1 vMotion Deepdive.

A big change in vSphere 5.1 is to the vMotion capabilities: virtual machines can now migrate between ESXi hosts with no need for shared storage, whereas previous versions of vSphere required shared storage.

See this in depth article on vSphere 5.1 vMotion to get a better understanding.



VMware vSphere 5.1 released

WHAT'S NEW in VMware vSphere 5.1

vSphere 5.1 is VMware’s latest release of its industry-leading virtualization platform. This new release contains the following new features and enhancements:

Key changes:

1) vRAM has been killed. vSphere is again licensed per CPU just as it was before vSphere 5.0. The licensing whitepaper is here.

2) Enhanced vMotion allows you to combine a vMotion and Storage vMotion into a single operation, without shared storage.

3) vSphere replication has been decoupled from SRM, and released as an available feature of every vSphere license from Essentials Plus through Enterprise Plus

4) New VMware vSphere Storage Appliance 5.1 for SMBs

5) New backup solution named vSphere Data Protection replacing VDR.


• Larger virtual machines – Virtual machines can grow two times larger than in any previous release to support even the most advanced applications. Virtual machines can now have up to 64 virtual CPUs (vCPUs) and 1TB of virtual RAM (vRAM).

• New virtual machine format – New features in the virtual machine format (version 9) in vSphere 5.1 include support for larger virtual machines, CPU performance counters and virtual shared graphics acceleration designed for enhanced performance.

• vSphere 5.1 introduces Flexible Space Efficiency (Flex-SE), a disk format to achieve the right balance of space efficiency and I/O throughput. This balance can be managed throughout the life cycle of a VM, from storage allocation (controlling the allocation block size) to how the blocks are managed after they are allocated (deleted blocks can be reclaimed). This feature enables the user to determine the right level of storage efficiency for a deployment. For example, you can use Flex-SE to optimize storage efficiency for virtual desktop infrastructure (VDI).

This feature will reclaim deleted storage in thin provisioned disk. Before vSphere 5.1 it was not possible to use the storage capacity freed when files are deleted inside the guest operating system.
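As a toy illustration of that reclamation idea (not VMware's SE sparse disk implementation — the class and method names are invented), a thin disk tracks which blocks are allocated and hands blocks freed by the guest back to the datastore:

```python
# Toy thin-provisioned disk: blocks are allocated on write, and blocks the
# guest OS deletes can be reclaimed (returned to the datastore) instead of
# staying allocated forever, as was the case before vSphere 5.1.

class ThinDisk:
    def __init__(self):
        self.allocated = set()

    def write(self, block):
        self.allocated.add(block)

    def guest_delete(self, blocks):
        # With reclamation, freed blocks shrink the on-datastore footprint.
        reclaimed = self.allocated & set(blocks)
        self.allocated -= reclaimed
        return len(reclaimed)


disk = ThinDisk()
for b in range(100):          # guest writes 100 blocks
    disk.write(b)
freed = disk.guest_delete(range(40))  # guest deletes 40 of them
print(freed, len(disk.allocated))     # prints: 40 60
```

VDI is the motivating case named above: desktops churn through temporary files constantly, so without reclamation a "thin" disk steadily inflates to its full size.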

• VMware has increased support for in-guest clustering. In previous versions, support was limited to a maximum of 2 nodes running as VMs; there is now support for up to a 5-node Microsoft cluster using the Node Majority Model.

• vSphere 5.1 now supports up to 4 parallel disk migrations per Storage vMotion operation. Support for 16Gbps Fibre Channel has been added, along with support for boot from software Fibre Channel over Ethernet (FCoE).

• Storage I/O Control (SIOC)
vSphere 5.1 improves SIOC functionality by automatically computing the best latency threshold for a datastore instead of using a default or user-selected value. This latency threshold is determined by modeling the point at which 90% of the datastore's peak throughput is achieved.
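As a hedged sketch of that modeling idea (a made-up model, not VMware's actual algorithm): given observed (latency, throughput) samples for a datastore, pick the first latency at which throughput reaches 90% of its peak:

```python
# Toy model of an automatic latency threshold: throughput on a datastore
# rises with allowed latency but flattens out; the threshold is set where
# 90% of the peak throughput is already reached, so pushing latency higher
# buys almost nothing.

def auto_threshold(samples, fraction=0.9):
    """samples: list of (latency_ms, iops), assumed sorted by latency."""
    peak = max(iops for _, iops in samples)
    for latency, iops in samples:
        if iops >= fraction * peak:
            return latency
    return samples[-1][0]


samples = [(5, 2000), (10, 5000), (20, 8500), (30, 9800), (50, 10000)]
print(auto_threshold(samples))  # prints: 30
```

At 30 ms the datastore already delivers 9,800 of its 10,000 peak IOPS, so that becomes the congestion threshold; allowing 50 ms would gain only 2% more throughput.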

• vSphere 5.1 enables more granular latency measurement for I/O load balancing, called “VMobservedLatency”. This is achieved by measuring the I/O request-response time between a VM and the datastore. In vSphere 5.0, latency was measured as the I/O request-response time between the host and the datastore.

• Advanced I/O Device Management
vSphere 5.1 introduces new commands for troubleshooting I/O adapters and storage fabrics. This enables diagnosis and querying of Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) adapters, providing statistical information that allows the administrator to identify issues along the entire storage chain from the HBA to the ESXi host, fabric, and storage port.


• vSphere Distributed Switch – Enhancements such as Network Health Check, Configuration Backup and Restore, Rollback and Recovery, and Link Aggregation Control Protocol (LACP) support deliver more enterprise-class networking functionality and a more robust foundation for cloud computing.

• Single-root I/O virtualization (SR-IOV) support – Support for SR-IOV optimizes performance for sophisticated applications.


• vSphere vMotion® – Leverage the advantages of vMotion (zero-downtime migration) without the need for shared storage configurations. This new vMotion capability applies to the entire network.

• vSphere Data Protection – Simple and cost effective backup and recovery for virtual machines. vSphere Data Protection is a newly architected solution based on EMC Avamar technology that allows admins to back up virtual machine data to disk without the need of agents and with built-in deduplication. This feature replaces the vSphere Data Recovery product available with previous releases of vSphere.

• vSphere Replication (separated from SRM) – vSphere Replication enables efficient array-agnostic replication of virtual machine data over the LAN or WAN. vSphere Replication simplifies management by enabling replication at the virtual machine level, and enables RPOs as low as 15 minutes.

• Zero-downtime upgrade for VMware Tools – After you upgrade to the VMware Tools available with version 5.1, no reboots will be required for subsequent VMware Tools upgrades.


• VMware vShield Endpoint™ – Delivers a proven endpoint security solution to any workload with an approach that is simplified, efficient, and cloud-aware. vShield Endpoint enables 3rd party endpoint security solutions to eliminate the agent footprint from the virtual machines, offload intelligence to a security virtual appliance, and run scans with minimal impact.


• vSphere Storage DRS™ and Profile-Driven Storage

– New integration with VMware vCloud® Director™ enables further storage efficiencies and automation in a private cloud environment.

• vSphere Auto Deploy™ – Two new methods for deploying new vSphere hosts to an environment make the Auto Deploy process more highly available than ever before.

• vSphere Web Client –The vSphere Web Client is now the core administrative interface for vSphere. This new flexible, robust interface simplifies vSphere control through shortcut navigation, custom tagging, enhanced scalability, and the ability to manage from anywhere with Internet Explorer or Firefox-enabled devices.

• vCenter Single Sign-On – Dramatically simplify vSphere administration by allowing users to log in once to access all instances or layers of vCenter without the need for further authentication.

• vCenter Orchestrator – Orchestrator simplifies installation and configuration of the powerful workflow engine in vCenter Server. Newly designed workflows enhance ease of use, and can also be launched directly from the new vSphere Web Client.


Free virtual machine backup

Veeam Backup Free Edition for VMware and Hyper-V

Veeam offers a free version of its award-winning Veeam Backup & Replication™ software. The
free version, Veeam Backup™ Free Edition, provides a subset of the functionality in the full (paid)
editions of Veeam Backup & Replication.

The free and full editions use the same download and install, with functionality controlled by the presence or absence of a license key.

The software operates in free mode when no license key is present, or when an expired
license key is present. There is no limit on the number of hosts, sockets or virtual
machines (VMs).

The full editions require a valid (unexpired) license key, and are limited to the number of host CPU sockets specified in the license key for each hypervisor (VMware or Hyper-V).


vSphere 5.0 Hardening Guide – Official Release

This is the official release of the vSphere 5.0 Security Hardening Guide, v1.0. The format of this guide has changed from previous versions: the guide is being released as an Excel spreadsheet only. The guideline metadata from earlier guides has been greatly expanded and standardized. CLI commands for assessment and remediation of the guidelines are included for the vCLI, ESXi Shell, and PowerCLI. For additional information, please see the Intro tab of the spreadsheet.



