Comprehensive explanation of AMX NetLinx Master-to-Master configuration

Please see the attachment 919-Master to Master unveiled for a comprehensive explanation of NetLinx master-to-master communications.


This document demystifies master-to-master systems and covers the information that must be understood to deploy them successfully. Most master-to-master systems can be deployed using “route mode direct” and an appropriate topology; these two items are explained in detail in the subsections “Master Routing” and “Topologies”.

Master-to-Master

Master-to-master (M2M) functionality consists of master routing and intersystem control. Master routing, the ability to route messages to any other master or device, is the foundation of all M2M functionality. Intersystem control allows a master, or its NetLinx program, to control and query the status of any device (or master) connected to any other master.

Master Routing

By design, NetLinx masters do not automatically establish an M2M connection with other masters simply because they are on the same network. The connection must be made intentionally by adding an entry to a list, called the “URL List”. The URL List on a NetLinx master forces the master to initiate a TCP connection to the specified URL/IP address. Therefore, the first step in assembling an M2M system is to set a unique system number on each master. Valid system numbers are 1 to 65535; system 0 is a wildcard referring to the local system and is used within DEFINE_DEVICE and NetLinx Studio connections. The next step is to configure the URL List on either of the masters, but not both, to point to the other master.
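
For example, with two masters on the same network, both steps can be performed from a telnet session to the master. The command names below are a sketch based on NI-series masters; confirm them against your master's terminal “help” listing. The IP address is a placeholder.

    (on the second master)
    SET SYSTEM NUMBER      assign a unique system number, e.g. 2 (takes effect after reboot)

    (on either master, but not both)
    SET URL                add the other master's IP address, e.g. 192.168.1.20
    GET URL                verify the URL List entries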

Once the systems are connected, they exchange routing information so that each master learns about every master in the interconnected network. Master routing is implemented primarily through the exchange of routing tables between masters. Each routing table is built from the entries in the local URL List, the DPS entries in the DEFINE_DEVICE section of the code, and the routing tables exchanged with connected masters. Routing tables are exchanged when masters first connect, and updates are exchanged periodically. Route table transmission has some randomization built in to prevent flooding the network when a master reports online/offline: each master adds a small random delay (1-5 seconds) so that they do not all transmit at the same time.

There is no fixed limit on the number of entries in a routing table; the number of routes depends on the number of systems in the network, which is itself unbounded. The only practical limit is the memory available in each master to maintain the system information for all systems throughout the network.

Masters can be configured to share their routing table in one of two route modes. The first, and the default, is “normal”: in this mode the master shares the entire routing table built from all interconnected masters. The second is “direct”: in this mode the master shares a routing table that contains only itself, and it will only connect with masters that are one hop away. As a diagnostic aid, the “show route” command can be issued from a telnet session to display the paths to other masters.
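
Route mode is likewise set from a telnet session. A brief sketch (commands only; output formatting varies by firmware):

    ROUTE MODE DIRECT      share only this master's own route; connect only one hop away
    ROUTE MODE NORMAL      default; share the full routing table
    SHOW ROUTE             display the known paths to other masters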

Design Considerations 

When designing a system that will utilize M2M functionality, there are multiple points to consider.

The first thing to consider is the reason for using M2M. The most common reasons are:

- Expansion of a system to add device ports
- Expansion of a system into an area the main system cannot reach
- Sharing of processing load
- Standalone capability of system areas
- Isolation of areas for security reasons
- Dedicating a master to common/shared devices located in a central location
- A combination of the above

The second thing to consider is the code requirements for each master:

- Masters that are only being used to add device ports must have an empty “.tkn” file loaded; otherwise the devices will not be accessible.
- Masters that are used to share the processing load, or that are intended to provide standalone capability, must define their local devices and the specific remote devices needed on the other masters in DEFINE_DEVICE.
- Ports on remote devices declared in DEFINE_DEVICE must exist. (For example, declaring touch panel port 80 when the loaded panel file only specifies 20 ports causes errors in the negotiation.)
- Events must be written for remote devices for the program to hear them. Writing events causes the master to negotiate for the transmission of these events over M2M (as reflected in SHOW NOTIFY); see the sketch after this list.
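
As a sketch of the DEFINE_DEVICE and event requirements above: the fragment below assumes a touch panel at device 10001 on a remote master running as system 2, with the local master running as system 1. The device numbers and channel are placeholders.

    DEFINE_DEVICE

    dvTP_Local  = 10001:1:0    // local panel: device 10001, port 1, system 0 (this system)
    dvTP_Remote = 10001:1:2    // panel on system 2; port 1 must exist in that panel's file

    DEFINE_EVENT

    BUTTON_EVENT[dvTP_Remote, 1]    // declaring this event makes the local master
    {                               // negotiate delivery of it over M2M (see SHOW NOTIFY)
        PUSH:
        {
            ON[dvTP_Local, 1]       // mirror the remote push onto the local panel
        }
        RELEASE:
        {
            OFF[dvTP_Local, 1]
        }
    }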

The third thing to consider is the connection topology: Is there a main master with which all other masters must connect? Do all the masters need to talk to each other? Or is some combination of the two required?
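
For illustration, two common topologies (system numbers are hypothetical; an arrow points from the master holding the URL List entry to the master it connects to):

    Star - one main master (S1) that all others connect with:

        S2 --> S1 <-- S3
               ^
               |
               S4         each satellite lists only S1 in its URL List;
                          S1's own URL List stays empty

    Full mesh - all masters talk to each other: each pair needs a URL List
    entry on exactly one side, so n masters make n*(n-1)/2 connections in total.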

Constraints 

To properly configure the URL Lists in a multi-master system, three hard constraints must be understood. The first constraint is the maximum of 200 entries in a URL List. Although important, this limit will rarely come into play, because the second constraint is far more relevant.

The second constraint is the maximum of 250 simultaneous TCP/IP connections supported by a single master. Of these, at most 200 may be ICSP (NetLinx device) connections. Roughly 25 of the remaining 50 are intended for internal services such as FTP, telnet, and HTTP; the other 25 are intended for IP connections opened from NetLinx code via IP_CLIENT_OPEN, IP_SERVER_OPEN, and Duet modules. If the code opens more than 25 IP connections, the excess is taken from the pool of 200 ICSP sockets, which reduces the number of available NetLinx device connections and, consequently, the number of usable entries in the URL List.
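
A worked example of that accounting (using the numbers above; the 40-connection figure is hypothetical):

    250 total sockets = 200 ICSP (device/URL) + ~25 internal services + 25 code IP
    code opens 40 IP connections -> 40 - 25 = 15 taken from the ICSP pool
    remaining ICSP pool          -> 200 - 15 = 185
    so at most 185 combined NetLinx device connections and URL List entries remain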

The third constraint is the routing metric limit of 15 usable hops across the topology of interconnected NetLinx masters. While 15 hops may sound limiting, it rarely is if the topology is designed carefully: sixteen masters daisy-chained in a line already span 15 hops, whereas the same sixteen masters arranged in a star around a central master are never more than two hops apart.

