What The Heck Is F5 Networks’ TMOS?
https://packetpushers.net/blog/what-the-heck-is-f5-networks-tmos/
Steven Iveson

F5 Networks' Traffic Management Operating System (TMOS) is, first and foremost and for the sake of clarity, NOT an individual operating system. It is the software foundation for all of F5's network or traffic (not data) products, physical or virtual. TMOS can seem more like a concept than a concrete thing when you first try to understand it; I've struggled to find a truly definitive definition of TMOS in any manual or on any website.
So, what is TMOS? It’s not too tough after all, really; TMOS encompasses a collection of operating systems and firmware, all of which run on BIG-IP hardware appliances or within the BIG-IP Virtual Edition. BIG-IP and TMOS (and even TMM) are often used interchangeably where features, system and feature modules are concerned. This can be confusing; for instance, although LTM is a TMOS system module running within TMM, it’s commonly referred to as BIG-IP LTM. I suspect we have the F5 marketing team to thank for this muddled state of affairs. See the comments section for some clarification from F5 but some debate too.
TMOS and F5's so-called 'full application proxy' architecture were introduced in 2004 with the release of v9.0. This is essentially where the BIG-IP software and hardware diverged; previously the hardware and software were simply both referred to as BIG-IP (or BIG-IP Controller). Now, the hardware or 'platform' is BIG-IP, and the software TMOS. Anything capable of running TMOS and supporting its full proxy counts as a BIG-IP, so the virtualised version of TMOS is called BIG-IP Virtual Edition (VE) rather than TMOS VE. Where the VE editions are concerned, just the TMM and HMS software components of TMOS are present (more details soon).
The primary software elements of BIG-IP, collectively known as TMOS, encompass all of these things;
- TMM: the Traffic Management Microkernel. This includes: software in the form of an operating system; system and feature modules (such as LTM); other modules (such as iRules); multiple network 'stacks' and proxies (FastL4, FastHTTP, Fast Application Proxy, TCPExpress, IPv4, IPv6 and SCTP); the software connection between TMM and the firmware that operates the dedicated SSL card and others; a 'native' SSL stack; interfaces to the HMS; and TurboFlex FPGA firmware.
- HMS: the Host Management Subsystem. This runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system, such as the WebGUI, tmsh CLI, DNS client, SNMP and NTP. The HMS also contains an SSL stack (known as the COMPAT stack), OpenSSL, which can also be used by TMM where necessary.
- LTM: this and other 'feature' modules such as GTM and APM expose specific parts of TMM functionality when licensed. They are typically focussed on a particular type of service (load balancing, authentication and so on).
- AOM: Always On Management; a lights-out management system accessible through the management network interface and serial console only. This is independent of the HMS (despite the shared network interface) and can be used to reset the device.
- IPMI: the Intelligent Platform Management Interface, a hardware-level interface specification and protocol supported on BIG-IP iSeries hardware. It allows for out-of-band monitoring and management of a system independently of (or without) an operating system, even when the system is 'off'. Like AOM, IPMI functions are accessible through the management network interface and serial console.
- MOS: a Maintenance Operating System; used for disk management, file system mounting and related maintenance tasks.
- EUD: End User Diagnostics; used to perform BIG-IP hardware tests.

So, that's six operating systems* (I'm not actually counting LTM) and related interfaces to understand.
It sounds more complex than it is; your average server has a BIOS, a RAID BIOS (the MOS) and an iLO or DRAC card (the AOM) and, along with the OS you install, that's four already. Here's a visual view:
BIG-IP TMOS Software Components
Let’s go into some further detail on each of these components.
Traffic Management Microkernel (TMM) TMM is the core component of TMOS as it handles all network activities and communicates directly with the network switch hardware (or vNICs for VE). TMM also controls communications to and from the HMS. Local Traffic Manager (LTM) and other modules run within the TMM.
TMM is single threaded until TMOS v11.3; on multi-processor or multi-core systems, Clustered Multi-Processing (CMP) is used to run multiple TMM instances/processes, one per core. From v11.3, two TMM processes are run per core, greatly increasing potential performance and throughput.
TMM shares hardware resources with the HMS (discussed next) but has access to all CPUs and the majority of RAM.
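To picture how CMP disaggregates traffic across TMM instances, here's a toy Python sketch. The hash, constants and function names are purely illustrative (this is not F5's actual disaggregation algorithm): the idea is simply that a connection's 5-tuple is hashed consistently onto one TMM, so every packet of a given connection lands on the same core.

```python
import hashlib

NUM_TMMS = 4  # e.g. one TMM process per core on a four-core platform

def tmm_for_flow(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Map a connection's 5-tuple to one TMM instance (illustrative only)."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_TMMS

# Every packet of the same connection hashes to the same TMM, so each TMM
# can keep its connection state private and no locking is needed across cores.
print(tmm_for_flow("10.0.0.1", 41000, "192.0.2.10", 443))
```

The same principle is what lets VIPRION (discussed in the comments below) stretch disaggregation across blades in a chassis: as long as the hash is consistent, flows can be spread over any number of TMMs.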
Host Management Subsystem (HMS) The Host Management Subsystem runs a modified version of the CentOS Linux operating system and provides the various interfaces and tools used to manage the system such as the WebGUI, Advanced (Bash) Shell, tmsh CLI, DNS client, SNMP and NTP client and/or server.
The HMS can be accessed through the dedicated management network interface, TMM switch interfaces or the serial console (either directly or via AOM).
The HMS shares hardware resources with TMM but only runs on a single CPU and is assigned a limited amount of RAM.
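As a concrete example of the HMS management tools, here are a few commonly used tmsh commands (run from the HMS; output and availability vary by TMOS version and licensed modules):

```
tmsh show sys version      # TMOS version and build
tmsh show sys hardware     # platform, CPU and memory details
tmsh show sys tmm-info     # per-TMM CPU and memory usage
tmsh list ltm virtual      # configured virtual servers (requires LTM)
```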
Always On Management (AOM) The AOM (another dedicated hardware subsystem) allows for 'lights out' power management of, and console access to, the HMS via the serial console or using SSH via the management network interface. AOM is available on nearly all BIG-IP hardware platforms including the Enterprise Manager 4000 product, but not on VIPRION. Note AOM 'shares' the management network interface with the HMS.
Maintenance Operating System (MOS) MOS is installed in an additional boot location that is automatically created when TMOS version 10 or above is installed. MOS, which runs in RAM, is used for disk and file system maintenance purposes such as drive reformatting, volume mounting, system re-imaging and file retrieval. MOS also supports network access and file transfer.
MOS is entered by interrupting the standard boot process via the serial console (by selecting TMOS maintenance at the GRUB boot menu) or booting from USB media.
The grub_default -d command can be used to display the MOS version currently installed.
Only one copy of MOS is installed on the system (taken from the latest TMOS image file installed) regardless of the number of volumes present.
End User Diagnostics (EUD)
EUD is a software program used to perform a series of BIG-IP hardware tests – accessible via the serial console only on system boot. EUD is run from the boot menu or via supported USB media.
TMOS Operating Planes
The different 'planes':
BIG-IP TMOS Software Component Planes
I hope this article helps clarify and explain what TMOS is all about; I know I was confused for years and understanding the true nature of TMOS has certainly helped me better understand and think more clearly about a great but ultimately complex product.
Updated: April 2018 to include TurboFlex and IPMI & update diagrams.
*As of v11 all these operating systems are 64-bit.
About Steven Iveson: The last of four children of the seventies, Steve was born in London and has never been too far from a shooting, bombing or riot. He’s now grateful to live in a small town in East Yorkshire in the north east of England. He’s worked in the IT industry for over 25 years in a variety of roles, predominantly in data centre environments. More recently he’s widened his skill set to embrace DevOps, Linux, containers, automation, orchestration, cloud, and more. He’s been awarded F5 Networks DevCentral MVP status seven times, in 2014, 2016, 2017, 2018, 2019, 2020 and 2021. Details of his F5 related books can be found here.
Comments: 22

Stephen Stack on April 22nd, 2013 - 1:36pm Great article again Steven. Man, this is one of the biggest pain points I had starting out with F5. It's an LTM, wait, no…. BigIP… right? Well, yes, but LTM is a module and TMOS is… and so on. This is a great explanation, and is doing the rounds in our office.
Reply What Lies Beneath on April 22nd, 2013 - 2:35pm Thanks Stephen. I didn’t even try and work this out until I started writing a book on the subject last year (unreleased at present). Even then, despite many, many hours of research over many months I didn’t put all the pieces together until a very kind person (who unfortunately I can’t name) gave me just enough ‘extra’ information for it all to make sense. Of course, it all seems obvious now.
Reply John on April 22nd, 2013 - 4:09pm Thanks for the article. I work for f5 in field engineering. Your article is probably a good sign we need to clean up the message if it is not clear. I’ll take that up the hill immediately.
First off, BigIP is the trademarked product marketing name for all things f5 based on our full proxy architecture. That’s it. It’s our ‘golden arches’.
Here is my too wordy technical explanation. Apologies in advance ..
I always explain our technical terms in the form of management, control, and data plane elements. Management plane elements interact with external actors to affect the system. Control plane elements form the basis to control the various aspects of a system. Data plane elements actually handle the traffic being managed for end users.
TMOS is the software ecosystem which forms the management, control, and dataplane of BigIP solutions.
As for hardware, we have various hardware components including switching fabric chips, our older home grown ASICs for connection offloading called PVA, our newer FPGA based offload technologies for offload and security, SSL/TLS offload ASICs, network processor for things like compression offloading, and generalised processors. These are means to an end and are driven by TMOS components.
TMMs are real-time software microkernels which form the overall L4-L7 intelligence for the data plane. If it's in the TMM, it's there to help push data traffic. We create clusters of these TMMs to linearly scale the traffic management data plane. TMMs have direct driver-level integration to much of our hardware. Think speed. It's software which thinks like a switch.
We have some data plane software which does not run inside of the TMM for a multitude of reasons. These are called plugin-based services. These include our WAF (product name ASM), which spends a great deal of its life consulting policy elements, and our acceleration-based technologies which make use of other general I/O devices like disks. Think intense compute flexibility.
The rest of the TMOS eco-system is there for management and control plane functionality. Management plane clients include our SNMP agents, iControl service, GUI services, and TMSH clients. Control plane services include switchboard controllers, routing daemons, configuration services, etc.
I hope this isolation into management, control, and data plane components helps the understanding of what’s all in TMOS.
Reply What Lies Beneath on April 22nd, 2013 - 7:06pm Hi John. Thanks for taking the time to comment and educate, it's appreciated. Let's quickly deal with my pedantic side first: the trademark is actually BIG-IP.
Swiftly moving on, I think this is mainly an issue because in some ways it's not essential knowledge; as an F5 customer for six years I never bothered asking, and the answer wasn't going to enlighten anyone or solve any problems. Of course, understanding the full complement of software components that make up TMOS (perhaps I should call it BIG-IP TMOS now) is more relevant. I was unaware of MOS and AOM (and SCCP before it) for many years and would certainly have benefited had I known about them.
Equally, I think it's a common and enduring mistake for most to believe the HMS is the device's operating system. I've incorrectly answered many an auditor's question along these lines! There are many possible reasons for this I can imagine, most of which are valid (although some commercial); a few books from F5 for those that want to explore further would probably help.
I don’t find your take on things too enlightening myself. I get your point about the BIG-IP ‘brand’ but let’s be clear, when I buy an ADC from F5, I’m buying a BIG-IP switch (with TMOS software) right? Same for a VE (although it’s less clear cut). TMOS and more specifically TMM provides the full proxy functionality. Happy to be corrected here.
Also on the hardware side, I’ve read a few bits from Don regarding FPGAs but I’m not clear on where they are used at present, could you be more specific? I appreciate your points on TMM; it clarifies a few things for me. You still don’t describe what LTM (or other modules ignoring ASM etc.) are actually referred to as? TMM ‘native’ modules perhaps?
Personally I find the segmentation of function by ‘plane’ rather confusing in this case. I really struggle to see the difference between control and data. Management functions are relatively well defined but what, for instance, is an iRule? I’d assume control so does that mean a connection handled by an iRule passes through the control and data planes. How would I relate this all to the SDN forwarding plane?
In summary, I appreciate your time and effort but I have to say it’s frustrating I always seem to have a few ‘outstanding’ questions.
Reply What Lies Beneath on April 22nd, 2013 - 7:36pm I’ve updated the diagram to include LTM etc. (but I’ll ignore ASM – interested to know if it can take advantage of SMP or CMP?).
Reply John Gruber on April 27th, 2013 - 11:27pm
I’ve had a bit of a problem with the posting system and posting with just my email…. so I’ll let your Google integration find me this time. I was also on the road all week.. Sorry for the delay in response.
Please let me backtrack a bit and see if I can answer the original question better than I did before.
f5 creates TMOS (Traffic Management Operating System) which indeed is the operating system which will run on various BIG-IP platforms. These BIG-IP platforms are either combinations of specific hardware components or are specific virtualisation environments for our virtual editions. Be it hardware or a virtualisation environment, the distinguishing aspect is that they can run some version of TMOS. If you would like to see a specific platform’s designation, cat /PLATFORM on any TMOS instance and you will see what TMOS’ HAL functionality discovered.
As an example of what makes up a BIG-IP platform, if TMOS runs on hardware which contains our older network processors (the PVA2 and PVA10 ASICs) and a packet-by-packet virtual service is provisioned, connection management offloading can happen. (These are virtual services with a fastL4 profile which do not use the standard dual TCP stack, session level, presentation level, or application level processing modules.) If the same virtual service was provisioned on a BIG-IP platform without the PVA processors, the packet-by-packet service would be handled in software (it's still 6x faster than if you need full application layer processing). If you run TMOS on a BIG-IP platform with specific TLS/SSL processors on it, then you can offload session key generation and bulk crypto for TLS/SSL session/presentation layer processing. If the same service was provisioned on a BIG-IP platform without the TLS/SSL processors, the service would run in software. The same goes for compression offloading, or any other thing we learn to offload to hardware. The TMOS environment means you can interact with the management and control services the same no matter if you are dealing with an all-software implementation or hardware offloading on a 48-core VIPRION cluster. In that sense TMOS is very much f5's operating system and it abstracts many BIG-IP platform details.
On specific f5 BIG-IP platforms, merchant silicon switch fabrics are utilised. On other, newer, BIG-IP platforms, network processing NICs are used. In virtualised environments we take advantage of para-virtualisation interfaces. In still other virtualisation environments we depend on fully virtualised NICs. It all depends on the performance needed. The good news is we can take you from our software-only BIG-IP platforms to multiple-hundred-Gbps BIG-IP hardware platforms and they all take the same virtual service definitions at L4-L7.
The way f5 can scale you from software to huge distributed clusters is because we constructed our TMOS technology to scale out all the way back in the v9 days (2004). In the v9 days, we shipped our first platforms which had hardware 'flow' disaggregation/re-aggregation functionality. This allowed us to perform cluster multi-processing (CMP), in which multiple TMMs (traffic management microkernels) all process traffic. This technology gets really interesting when we let the disaggregation/re-aggregation span across hardware blades in a chassis. We call that technology VIPRION.
Today, on certain BIG-IP platforms, we use FPGAs to perform the disaggregation/re-aggregation functionality, do assisted mode packet-by-packet offloading (ePVA), do TCP SYN cookie checks, and perform other interesting security tricks. What we do in FPGAs continues to grow as we need it to. If you buy a BIG-IP platform with FPGAs, you can take advantage of those offload services, but you really don't have to know they are there to take advantage of them because TMOS detects them and utilises them. That's the joy of TMOS. In the future, when generalised processors support massive parallelism with TMM without heating the planet too much, we might choose the software flexibility of that technology over FPGAs. TMOS will let us do that. Either way we keep growing, scaling, and reaching out to new technologies. We have lived through more than one generational sway of the technology.
Now for BIG-IP non-platform products…
TMOS includes software and services which can perform many different feature aspects. f5 bundles those features into licensed BIG-IP products. If you want to see what TMOS features you are licensed to use on any given BIG-IP platform, look at the enabled list in the /config/bigip.license file. Any given BIG-IP product is the packaging of TMOS features.
BIG-IP Local Traffic Manager (LTM) is the TMOS feature bundle targeted towards in-datacenter ADC functionality.
BIG-IP Global Traffic Manager (GTM) is the TMOS feature bundle targeted towards multi-datacenter global traffic management.
Other BIG-IP TMOS feature bundles include:
BIG-IP ASM – Application Security Manager, our WAF feature set.
BIG-IP APM – Access Policy Manager, our multifaceted SSL VPN and identity solutions.
BIG-IP AFM – Advanced Firewall Manager, our network firewalling solutions.
We have more… ….
Again.. BIG-IP products are TMOS feature bundles which cater to specific traffic management needs.
You can decide to run multiple feature bundles (which can overlap!) on the same BIG-IP platform. So you can have BIG-IP LTM+GTM+APM all on the same BIG-IP platform. f5 is an engineering company, so we might tell you that reasonably you should not run every TMOS feature unless the BIG-IP platform has enough processing, RAM, and disk I/O to handle it. That's where we differ from others in this space. We get beat up over this by our competition, but our customers and support division like things that work. (Frankly, most of our competition call out things as 'product' that we throw into our base product, like HTTP caching. When f5 differentiates a product by a feature bundle, there is a compelling service reason to do so.)
In summary:
BIG-IP platforms = something that can run TMOS. I guess the term ‘BIG-IP switch’ can apply because there is no instance of TMOS that does not perform L2 frame forwarding.
BIG-IP products = some licensed bundle of TMOS features which is suited to specific traffic management needs.
I wanted to comment on the idea of a data plane = forwarding plane. f5 data plane can be configured to process traffic including:
L2 Frame forwarding = traditional and opaque bridging
L3 Packet forwarding = routing, NAT, SNAT
L4 Connection management = packet-by-packet processing or dual stack connection processing
L5 Session management = session offloading and multiplexing
L6 Presentation management = encryption/decryption, serialisation/deserialisation, anything an iRule can do with scan commands
L7 Application message management = message by message load balancing, message content manipulation, etc.
The great sauce at f5 is the programmable data plane. It’s far more than a forwarding fabric or a regex packet engine (PE).
Our control plane services are actually well defined too. We take pleasure in our interaction with dynamic routing, our highly functional SOD fail-over services, our device service group clustering, our message based provisioning services, and a great control plane integration between our local and global traffic mechanisms called iQuery. All are part of TMOS.
Our management plane services, including our GUI client, our TMSH programmable shell, our SNMP agents, our iControl API services (which we have had for years), and many new cool API features we are releasing this year, all make us proud. Bring on the automation and devop crowds.
You did comment on specific BIG-IP platforms having different functionality for management. That is true… On older platforms we had a dedicated microprocessor running its own Linux kernel called the SCP (system control processor). It had its own SSH stack and could be addressed for lights-out management console access. In newer BIG-IP platforms we improved on the older SCP design with a new microprocessor we called the AOM (always on management). Appreciate that sometimes this level of lights-out management does not make sense for a specific BIG-IP platform, so we don't include it. Ask your account engineer for the specifications of any BIG-IP platform you buy!
I hope this helps and does not just add to confusion ..
John
Reply What Lies Beneath on April 29th, 2013 - 8:08pm Hey John. Thanks again for your time and effort. There’s not too much new here (for me) but I appreciate the clarifications and I have learnt something. Just to make the point to others regarding some vendors, I’m happy to see that F5 continue to hold a very mature stance where blogs, social media and their staff are concerned.
I'll update the post and diagram shortly to accommodate some of this. Unfortunately, I'm still left wanting in some areas but the list is much smaller now so we're getting there. So, my last few queries;
1) Is ePVA software or not? Is this the FastL4 feature in VE (software), done using FPGAs in hardware or both dependent on platform? (assuming the latest hardware/software)
2) I’d still consider LTM and other modules as exposing specific TMM features. However, as management features such as the GUI are also affected when licensed perhaps TMOS modules would be a better term? Surely BIG-IP modules is misleading as we’re talking about exposing software features (even if hardware support is involved).
I’d love to post about BIG-IQ, if there’s any help you can give me there, let me know.
I’ll post an updated diagram regarding ‘planes’ soon, if there’s anything wrong with it please let me know.
Thanks again, I’m sure a fair number of people have found this very useful.
Reply John Gruber on May 5th, 2013 - 2:09pm Some answers…
ePVA performs ‘assisted’ mode connection offloading. That means the initial qualification and policy for the connection, like load balancing decision, is done by a software TMM, but then subsequent packets associated with the connection are fully handled in hardware.
Using the term TMOS feature is absolutely better. There are TMOS features which involve non-TMM aspects as you noted. Again, BIG-IP is the marketing term for a TMOS feature bundle. The terminology for any product bundling is controlled by our product management and marketing departments. (my way of dodging the blame for bad terms ) They align with what a customer can order.
The BIG-IQ family of management services are centralised and don't carry data plane traffic. That's what makes them different from BIG-IP. The BIG-IQ moniker is there to differentiate from the BIG-IP TMOS feature bundles as they are standalone management products which interact with BIG-IPs and other systems through various interfaces. BIG-IQ supports interactions with management clients through its northbound API interfaces. BIG-IQ calls other management APIs (like AWS APIs) through its eastbound interfaces. BIG-IQ supports management of BIG-IP systems through its southbound interfaces. Beyond that we need to get into specifics about the various BIG-IQ modules and their functions to detail the interactions.
Let me look at your diagram a bit more and see if there is anything I can add.
Thanks everyone!
What Lies Beneath on May 7th, 2013 - 2:15pm Thanks once again John. I hate to keep going in this format but can you clarify the difference between ePVA and PVA? I understand that ePVA isn’t all hardware as PVA seems to be but that sounds like a step back, not forwards. Does ePVA allow for hardware offload above L4 perhaps or have some other advantage(s) over PVA?
Jeanne Lewis on October 8th, 2014 - 8:20pm Here is a link to our new F5 Big-IP TMOS Operations Guide. http://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/f5-tmos-operations-guide-1-0.html
Reply Harry on December 19th, 2015 - 10:22am Thanks John and Lewis, you guys have shed some great knowledge regarding TMOS, was in a fix could not understand TMOS properly until saw this great article . Thanks !!!!
Reply Alberto on February 22nd, 2016 - 9:11pm Great article with good posts, it gives a good idea how TMOS works internally. Thanks!!!
Reply Steven Iveson on February 23rd, 2016 - 9:28am Many thanks for taking the time to comment Alberto. Cheers
Reply Davis on April 22nd, 2016 - 5:30am Thank you Steven for being such a valuable contributor and person to the F5 community!
I am wondering if you have a clear understanding of what features are native to TMOS and what other features are only available through add-on modules on the TMOS?
For example, (correct me if I am wrong), the IP Intelligence feature (from Web Root?) is part of TMOS and every appliance regardless of Good Better Best bundle or model series can utilise this feature while URL filtering is a module-specific feature found only if you have the Secure Web Gateway module.
Thanks!
Reply Steven Iveson on April 22nd, 2016 - 10:35am Thanks Davis.
Things may have changed as I’ve not been as active as I’d like recently but my general understanding is that the majority of features are built into TMOS itself and simply not exposed for use until licensed. The modules may have supporting requirements such as database or GUI components but ultimately, the functionality they rely upon exists within TMOS.
As to what each module exposes, well, the website and manuals tell you all you need to know in nearly all cases.
Hope that helps. Cheers
Reply Frank Cerniglia on April 10th, 2018 - 1:41am HI for some reason I can not find TMSH on my F5 BigIP 1600 f? please advise me my email is [redacted]
Reply Steven Iveson on April 11th, 2018 - 9:54am I’ve dropped you an email Frank.
Reply komatineni on January 21st, 2019 - 1:50am What a relief to find this. Trying my best going through few presentations and user guides but this article (& the comments) summarizes in a nutshell. Thanks for saving few hours of reading.
Reply Steven Iveson on January 21st, 2019 - 9:37am Thanks, you’re welcome.
Reply Vishal Kadam on December 25th, 2019 - 2:00pm Thanks all for this article
Reply Brijesh on August 8th, 2019 - 12:59pm Hi Steven
Nice article. Just wondering where IMISH will fall here?
Many Thanks,
Reply Steven Iveson on August 9th, 2019 - 7:12am Thanks Brijesh. IMISH Is part of the HMS (specifically part of ZebOS/IP Infusion).
Reply