{"id":3883,"date":"2011-01-25T00:41:18","date_gmt":"2011-01-25T00:41:18","guid":{"rendered":"http:\/\/blog.open-e.com\/?p=3883"},"modified":"2025-07-03T08:30:21","modified_gmt":"2025-07-03T08:30:21","slug":"ping-node-explained","status":"publish","type":"post","link":"https:\/\/www.open-e.com\/blog\/ping-node-explained\/","title":{"rendered":"Ping-Node \u2013 explained."},"content":{"rendered":"<p>\t\t\t\tWe wanted to provide you with an in-depth explanation of the Ping-Node, clarifying its function and usage.<\/p>\n<p>Incorrect Ping-Node configuration can cause problems with HA clusters.<\/p>\n<p>This post is therefore essential for a proper iSCSI Failover setup.<\/p>\n<p>Why do we need a ping-node, or rather, ping-nodes?<\/p>\n<p>DSS V6 iSCSI Failover (and soon <a title=\"nas-nfs failover\" href=\"http:\/\/www.open-e.com\/solutions\/nas-nfs-failover\/\">NFS Failover<\/a>) uses a heartbeat so that the Primary and Secondary hosts can check on each other.<\/p>\n<p>We require at least 2 NICs configured for the heartbeat. Additionally, we strongly recommend using a direct crossover, also called a point-to-point connection, for the Volume Replication. This path must be enabled for the heartbeat as well. With a direct connection, both hosts can communicate even during a switch failure, and you save 2 switch ports.<\/p>\n<p>So, what would happen if both the Primary and Secondary hosts are functioning well and able to communicate with each other (i.e. via the direct connection mentioned above), but the storage client has lost its network connection to the Primary host?<\/p>\n<p>For example, the switch port or the NIC in that path has a problem.<\/p>\n<p>The heartbeat will NOT trigger the failover procedures because both hosts \u201cthink\u201d everything is OK, yet the storage client still cannot access the storage. This is where the Ping-Node comes into play and prevents such situations. 
The cluster manager realizes that the Primary host has lost access to the Ping-Node(s) while the Secondary host still has access, so it executes a failover. Because lost access to even a single Ping-Node will cause a failover, we strongly recommend using at least 2 Ping-Nodes for every network segment that needs one. This will minimize failover events in case of an unreliable Ping-Node.<\/p>\n<p>Now, which network segments need Ping-Node(s) for monitoring? Certainly not every NIC; only those network paths which are connected to storage clients need to be monitored with Ping-Node(s).<\/p>\n<p>This is best explained with examples, so let\u2019s consider the first example, with bonding.<\/p>\n<p><a href=\"http:\/\/blog.open-e.com\/wp-content\/uploads\/2011\/01\/Bonding.pdf\">Failover with Bonding &#8211; Click Here<\/a><\/p>\n<p>Here the storage clients (VMware, XenServer, Windows) are connected via a bonded network segment, so the Ping-Nodes are in the 192.168.1.x subnet. A minimum of one Ping-Node is required, but we recommend at least 2.<\/p>\n<p><a href=\"http:\/\/blog.open-e.com\/wp-content\/uploads\/2011\/01\/Mpio.pdf\">Failover with Mpio &#8211; Click Here<\/a><\/p>\n<p>In this case the storage clients are connected via both network paths, so Ping-Nodes are in the 192.168.1.x and 192.168.2.x subnets.<\/p>\n<p>So a minimum of two Ping-Nodes is required, but we recommend at least 4 due to Multipath.\t\t<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We wanted to provide you with an in-depth explanation of the Ping-Node, clarifying its function and usage. Incorrect Ping-Node configuration can cause problems with HA clusters. 
The&nbsp;&#8230;<\/p>\n","protected":false},"author":2,"featured_media":55796,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[777,765],"tags":[101,257,309,410,500,501,715,739,747],"class_list":["post-3883","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-iscsi","category-open-e-legacy-products","tag-bonding","tag-failover","tag-heartbeat","tag-multipath","tag-ping-node","tag-pingnode","tag-vmware","tag-windows","tag-xenserver"],"acf":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/posts\/3883","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/comments?post=3883"}],"version-history":[{"count":1,"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/posts\/3883\/revisions"}],"predecessor-version":[{"id":55151,"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/posts\/3883\/revisions\/55151"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/media\/55796"}],"wp:attachment":[{"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/media?parent=3883"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/categories?post=3883"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.open-e.com\/blog\/wp-json\/wp\/v2\/tags?post=3883"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}