BIG-IP Per-App VE
Using BIG-IP Per-App VE outside of BIG-IP Cloud Edition: BIG-IP Per-App VEs can also be purchased outside of BIG-IP Cloud Edition. They are available as a bundle of licenses and include a free BIG-IQ license manager component.
A BIG-IP Per-App VE is a specially licensed BIG-IP instance that has been designed to provide dedicated services for a single application. The full features of BIG-IP software are enabled, but it is right-sized for use as a dedicated device.
Each BIG-IP Per-App VE comes with:
- Single virtual IP address
- Three virtual servers (a combination of a virtual address and a listening port; see the sketch after this list)
- 25 Mbps or 200 Mbps throughput
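To make the virtual server entitlement concrete, the sketch below creates three virtual servers that share the one licensed virtual address, each listening on a different port, using the BIG-IP iControl REST API from Python. The management address, credentials, and virtual address are placeholders; in a BIG-IP Cloud Edition deployment this configuration would normally be pushed from a BIG-IQ application template rather than applied directly to the device.

```python
import requests

BIGIP = "https://per-app-ve.example.com"   # placeholder management address
AUTH = ("admin", "admin-password")         # placeholder credentials
VIRTUAL_ADDRESS = "198.51.100.10"          # the single licensed virtual IP

# One virtual address, up to three virtual servers (virtual address + listening port)
for name, port in [("app1-http", 80), ("app1-https", 443), ("app1-alt", 8443)]:
    body = {
        "name": name,
        "destination": f"/Common/{VIRTUAL_ADDRESS}:{port}",
        "ipProtocol": "tcp",
    }
    r = requests.post(f"{BIGIP}/mgmt/tm/ltm/virtual",
                      json=body, auth=AUTH, verify=False)
    r.raise_for_status()
```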
There are two software module options available in BIG-IP Per-App VEs:
BIG-IP Local Traffic Manager
F5 BIG-IP Local Traffic Manager™ (LTM) software delivers industry-leading application traffic management, including advanced load balancing, rate shaping, content routing, SSL management, and complete control of the application layer traffic in both directions.
F5 Advanced WAF
F5 Advanced WAF offers all the features of a traditional web application firewall (WAF) plus enhanced protection in the form of layer 7 DDoS mitigation, advanced bot detection, and API security management. Advanced WAF comes with a set of BIG-IP LTM traffic management features to effectively manage traffic to downstream application servers. The deployment of Advanced WAF policies is managed as part of the application template component.
Virtual machine requirements
BIG-IP Per-App VEs benefit from the streamlining of image and disk sizes introduced in recent releases of BIG-IP. In traditional BIG-IP deployments, software upgrades were performed "in place" by downloading a new software image onto a running device and then following an upgrade procedure. With BIG-IP Cloud Edition, the devices providing application delivery and security services are, for the most part, immutable: changes are not made directly to the device configurations, but are instead deployed using the device and application templates, and the old versions are then retired in a rolling upgrade. Additional storage space for multiple versions of the BIG-IP software is therefore not required, and the disk image size can be reduced.
- In VMware deployments, BIG-IP Per-App VEs are available in non-upgradable images with reduced storage footprints. For details on virtual machine specifications in VMware, see the Virtual Edition Setup Guide for ESXi.
- For production use on AWS, F5 recommends M3 or M4 instance types, with a minimum of two virtual cores and 4 GB of memory for BIG-IP LTM deployments and 8 GB for Advanced WAF (see the launch sketch after this list).
- For production use on Microsoft Azure, F5 recommends the Standard B2s and B2ms VM sizes, with a minimum of two virtual cores and 4 GB of memory for BIG-IP LTM deployments and 8 GB for Advanced WAF.
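As an illustration of the AWS sizing guidance above, the sketch below launches a single instance on an M4 instance type with boto3. The AMI ID and key pair are hypothetical placeholders, and in BIG-IP Cloud Edition BIG-IQ launches BIG-IP Per-App VEs automatically as part of a service scaling group; the point here is only how the instance type maps onto the vCPU and memory recommendations.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# m4.large provides 2 vCPUs and 8 GB of memory, matching the Advanced WAF minimum;
# m4.xlarge (4 vCPUs, 16 GB) would give additional headroom.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical BIG-IP Per-App VE AMI ID
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    KeyName="bigip-admin-key",         # hypothetical key pair
)
```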
Scaling and managing BIG-IP Per-App instances in a service scaling group
VMware–BIG-IP service scalers
In VMware, per-app traffic to BIG-IP Per-App VEs is scaled via a specialized BIG-IP cluster using MAC address forwarding, which preserves the client source and destination IP addresses. This can be important for some of the layer 7 functionality offered by the BIG-IP Per-App VEs, and also ensures accurate data collection for the visibility services that BIG-IQ offers.
BIG-IP service scalers perform basic load balancing across BIG-IP Per-App VEs and have no license limit on throughput (however, virtual hardware resources will obviously limit maximum throughput). Optionally, the service scaler can be enabled with firewall capabilities offering network ACLs and layer 4 DoS mitigation capabilities. The service scalers cannot perform SSL or layer 7 functions at this time.
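As a rough illustration of the address-preserving behavior described above, the sketch below defines a Fast L4 virtual server on a service scaler with address and port translation disabled and no SNAT, so the downstream BIG-IP Per-App VEs still see the original client and destination IP addresses. It uses the iControl REST API with placeholder addresses and credentials, and it does not reproduce the full MAC address forwarding configuration, which BIG-IQ builds automatically from the device and application templates.

```python
import requests

SCALER = "https://bigip-scaler.example.com"   # placeholder management address
AUTH = ("admin", "admin-password")            # placeholder credentials

virtual = {
    "name": "app1-ssg-l4",
    "destination": "/Common/203.0.113.10:443",
    "ipProtocol": "tcp",
    "profiles": [{"name": "fastL4"}],              # L4-only processing, no SSL/L7
    "translateAddress": "disabled",                # keep the original destination IP
    "translatePort": "disabled",                   # keep the original destination port
    "sourceAddressTranslation": {"type": "none"},  # keep the original client IP
    "pool": "/Common/per-app-ve-pool",             # assumed pool of Per-App VEs
}

r = requests.post(f"{SCALER}/mgmt/tm/ltm/virtual",
                  json=virtual, auth=AUTH, verify=False)
r.raise_for_status()
```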
BIG-IP service scalers require the following virtual machine specifications:
| | Minimum | Maximum |
| --- | --- | --- |
| vCPU | 2[1] | 4 |
| Memory | 4 GB | 16 GB[2] |
| Disk Space | 40 GB[3] | 82 GB |
| Network Interface Cards | 4 | 10 |
BIG-IP service scalers can belong to more than one service scaling group and can be shared across multiple applications (while BIG-IP Per-App VEs are—as the name suggests—dedicated to a single application).
Setting up and configuring service scalers in a service scaling group is covered in BIG-IQ Centralized Management: Local Traffic & Network Implementations.
[1] Four vCPUs required for additional firewall functionality.
AWS ELB Classic
In AWS, services are scaled using Elastic Load Balancing (ELB) Classic instances. ELB Classic provides basic L4 load balancing and availability across BIG-IP Per-App VEs, and a logical instance of ELB is dedicated to a single service scaling group. Each application therefore requires a dedicated ELB configuration. The AWS service manages the scaling of ELB instances to meet demands.
Setting up AWS ELB instances in a service scaling group is covered in BIG-IQ Centralized Management: Managing Applications in an Auto-Scaled AWS Cloud.
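For orientation, the sketch below shows the kind of ELB Classic configuration that sits in front of a service scaling group: a TCP listener plus registration of the BIG-IP Per-App VE instances. BIG-IQ creates and maintains this configuration itself; the names, subnet, and instance ID here are placeholders.

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# One ELB Classic instance per service scaling group / application
elb.create_load_balancer(
    LoadBalancerName="app1-ssg-tier1",
    Listeners=[{
        "Protocol": "TCP",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "TCP",
        "InstancePort": 443,
    }],
    Subnets=["subnet-0123456789abcdef0"],   # placeholder VPC subnet
)

# Register the BIG-IP Per-App VEs that serve this application
elb.register_instances_with_load_balancer(
    LoadBalancerName="app1-ssg-tier1",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # placeholder instance ID
)
```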
Azure Load Balancer
In Azure, services are scaled using Azure Load Balancer instances. Load Balancer provides basic L4 load balancing and availability across BIG-IP Per-App VEs, and a logical instance of Load Balancer is dedicated to a single service scaling group. As a result, each application requires a dedicated Load Balancer configuration. The Azure service manages the scaling of Load Balancer instances to meet demands.
Setting up Azure Load Balancer instances in a service scaling group is explained in BIG-IQ Centralized Management: Managing Applications in an Auto-Scaled Azure Cloud.
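The sketch below uses the Azure SDK for Python to inspect the Load Balancer instances in the resource group that hosts a service scaling group, as a quick way to confirm what has been provisioned. The subscription ID and resource group name are placeholders, and the exact attributes available depend on the SDK version in use.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# List the Load Balancers in the resource group used for the service scaling group
for lb in client.load_balancers.list("bigip-cloud-edition-rg"):   # placeholder group
    pools = [pool.name for pool in (lb.backend_address_pools or [])]
    print(lb.name, pools)
```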
BIG-IQ
BIG-IQ can manage more than BIG-IP Per-App VEs.
BIG-IQ can discover and manage BIG-IP instances of all supported software versions—no matter what the platform or location. The platform can perform device management, visualize statistics, and deploy templated application service configurations onto physical, virtual, and cloud-deployed BIG-IP instances. BIG-IQ can even offer autoscaling for supported, traditional (not per-app) BIG-IP VEs on supported platforms (currently AWS, Azure and VMware).
BIG-IQ provides centralized management for all components that make up BIG-IP Cloud Edition. All activities and reporting are managed via BIG-IQ and administrative access to BIG-IP Per-App VEs is not required.
BIG-IQ:
- Creates new service scaling groups.
- References device templates within each service scaling group to manage the life cycle of BIG-IP Per-App VEs. Device templates include all of the information needed to spin up a BIG-IP Per-App VE with no human intervention.
- Provides deep analytics at the application level so that application owners can troubleshoot their own issues.
- Provides device-level performance and capacity metrics for troubleshooting and planning.
- Offers role-based access allowing application owners to deploy F5 L4–7 services for an application via predefined application templates from the service catalog in a self-service manner.
F5 recommends the following virtual hardware for BIG-IQ in a BIG-IP Cloud Edition deployment.
| | Minimum | Maximum |
| --- | --- | --- |
| vCPU | 4 | 8 |
| Memory | 4 GB | 16 GB |
| Disk Space | 95 GB | 500 GB |
| Network Interface Cards | 2 | 10 |
Installing and configuring BIG-IQ is covered in the Planning and Implementing an F5 BIG-IQ Centralized Management Deployment Guide.
BIG-IQ communication with virtual infrastructure management
BIG-IQ is capable of starting, licensing, provisioning, and configuring BIG-IP Per-App VEs on demand, as part of a service scaling group or in a scale-out environment. This requires authenticated access into the virtual infrastructure environment.
In VMware
In VMware, the following is required: credentials to access vCenter, the vCenter hostname, an SSL certificate for secure communication, and other information about the ESX environment such as hosts/clusters, datastores, (distributed) virtual switches (vSwitches), and resource pools.
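A quick way to validate the vCenter credentials and inventory information described above is a short pyVmomi session. The sketch below connects to a placeholder vCenter with a placeholder service account and lists the datastores that account can see.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Use a verified context with the vCenter SSL certificate in production
context = ssl._create_unverified_context()

si = SmartConnect(host="vcenter.example.com",        # placeholder vCenter hostname
                  user="bigiq-svc@vsphere.local",    # placeholder service account
                  pwd="service-account-password",
                  sslContext=context)
content = si.RetrieveContent()

# Enumerate datastores visible to the service account
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for datastore in view.view:
    print(datastore.name)

Disconnect(si)
```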
In AWS
In AWS, the following is required: an Identity and Access Management (IAM) user access key and associated secret to make API calls, and ELBs to provide tier-one traffic distribution. Follow AWS best practices to create and manage the keys.
The IAM user should have the administrator access policy attached and have permission to create auto-scaling groups, Amazon Simple Storage Service (S3) buckets, instances, and IAM instance profiles. For details on permissions and overall AWS configuration, see https://aws.amazon.com/documentation.
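A minimal sketch of creating such an IAM user and access key with boto3 is shown below. The user name is a placeholder, and attaching the AWS-managed AdministratorAccess policy mirrors the broad permissions described above; a tightly scoped custom policy is preferable where your security practices require it.

```python
import boto3

iam = boto3.client("iam")

iam.create_user(UserName="bigiq-cloud-edition")   # placeholder user name

# Broad permissions as described above; a narrower custom policy can be used instead
iam.attach_user_policy(
    UserName="bigiq-cloud-edition",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

key = iam.create_access_key(UserName="bigiq-cloud-edition")
print(key["AccessKey"]["AccessKeyId"])
# The secret (key["AccessKey"]["SecretAccessKey"]) is what BIG-IQ needs for API calls;
# store it securely and rotate it per AWS best practices.
```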
BIG-IQ high availability and backup
Since BIG-IP Cloud Edition essentially routes all control plane activities through the BIG-IQ management layer—BIG-IQ handles real-time monitoring and scale-in/out events and manages license assignment and revocation—it becomes a critical part of the delivery system and therefore is typically deployed in a highly available, redundant configuration.
Planning should, therefore, include an active-standby BIG-IQ pair, with the appropriate license for the number of BIG-IP instances under management.
Configuring BIG-IQ for high availability is covered in the Planning and Implementing an F5 BIG-IQ Centralized Management Deployment Guide.
BIG-IQ Data Collection Devices
Data Collection Devices in BIG-IQ are responsible for collecting, storing, and processing traffic and performance data from the BIG-IP Per-App VEs. BIG-IP Per-App VEs send performance and traffic telemetry to the Data Collection Devices, which process and store it; BIG-IQ then queries the Data Collection Devices to provide visibility and reporting. Data Collection Devices are arranged into clusters that work together and replicate stored data for redundancy.
F5 recommends the following virtual hardware for Data Collection Devices used in BIG-IP Cloud Edition:
| | Recommended |
| --- | --- |
| vCPU | 8 |
| Memory | 32 GB |
| Disk Space | 500 GB |
| Network Interface Cards | 2 |
A note on disk subsystems: BIG-IQ Data Collection Devices store, process, and analyze data collected from BIG-IP Per-App VEs to produce reports and dashboards for the BIG-IQ system. This is a disk I/O-intensive workload, so the underlying storage should be sized for both capacity and performance. For large deployments of BIG-IP Per-App VEs, or for extensive logging and analysis, deploy high-performance storage subsystems. Capture, search, and indexing operations generate both random and sequential I/O, often with a high concurrency of tasks.
For additional information, see the BIG-IQ Centralized Management Data Collection Devices Sizing Guide.
Networking and Connectivity
VPN for Data Collection Devices with Public Cloud deployments
When new BIG-IP Per-App VEs are created, they are given the self-IP address of the Data Collection Devices they should connect back to. This is a fixed setting (as of BIG-IQ 6.0). Connections in both directions are required between the Data Collection Devices and the BIG-IP Per-App VEs. In many environments, and especially when BIG-IP Per-App VEs are on AWS or Azure while BIG-IQ and the Data Collection Devices are on the customer premises, VPN connectivity will be required to route traffic successfully in both directions, since the Data Collection Devices will generally have an RFC 1918 non-routable IP address. BIG-IP Cloud Edition requires unique IP address ranges across Amazon Virtual Private Cloud (Amazon VPC) and Azure Virtual Network (Azure VNet) deployments, meaning that the VPCs and VNets used cannot have overlapping address spaces.
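Because overlapping address spaces are a common cause of routing problems between the Data Collection Devices and cloud-hosted BIG-IP Per-App VEs, it can be worth checking candidate ranges before provisioning. The sketch below uses Python's standard ipaddress module with assumed example ranges.

```python
import ipaddress

# Assumed example address ranges; replace with your actual allocations
ranges = {
    "on-prem (BIG-IQ / DCDs)": ipaddress.ip_network("10.10.0.0/16"),
    "aws-vpc-app1":            ipaddress.ip_network("10.20.0.0/16"),
    "azure-vnet-app2":         ipaddress.ip_network("10.20.0.0/16"),  # overlaps with the VPC above
}

names = list(ranges)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if ranges[a].overlaps(ranges[b]):
            print(f"Overlapping address space: {a} {ranges[a]} <-> {b} {ranges[b]}")
```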