general config, naming convention and port calculation methods @ Terraform Custom Provider For MongoDB OPS Manager

In this article about the Terraform Custom Provider For MongoDB OPS Manager, I'm going to go into more detail about the general config, the naming convention, and the port calculation methods.

generalConfig.json

configPath
└── common/
    └── general/
        └── generalConfig.json

This file is required. In generalConfig.json we can set a few parameters, such as:

  • logPath: path to store log files

Parameters related to applying a change to the automation config:

  • waitRounds: number of loops to wait for all processes to reach the goal state.
  • waitSeconds: seconds between one state check and the next.

Parameters related to user password generation:

  • pwdLenght: length of the password.
  • pwdHasSpecialChars: whether the password may contain special characters.

Example:

{
    "logPath": "/tmp/customopsmanager/",
    "waitRounds" : 20,
    "waitSeconds": 10,
    "pwdLenght": 18,
    "pwdHasSpecialChars": true
}
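The provider itself generates passwords internally, but to make these two parameters concrete, here is a minimal Python sketch of a generator honoring pwdLenght and pwdHasSpecialChars (the exact special-character set used by the provider is an assumption):

```python
import secrets
import string

def generate_password(pwd_length: int, has_special_chars: bool) -> str:
    # Build the candidate alphabet; the special-char set is illustrative only.
    alphabet = string.ascii_letters + string.digits
    if has_special_chars:
        alphabet += "!@#$%^&*-_"
    # secrets gives cryptographically strong random choices.
    return "".join(secrets.choice(alphabet) for _ in range(pwd_length))

# Matches the example config: pwdLenght 18, pwdHasSpecialChars true.
print(len(generate_password(18, True)))  # 18
```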

namingConvention.json

configPath
└── common/
    └── replicaSet/
        └── namingConvention.json

Defines the naming convention parameters. Example:

{
    "prefix": "MOTOGP-",
    "case": "upper",
    "sufix": "-RACE"
}

In this example, for the sid ITALY, the replica set name would be MOTOGP-ITALY-RACE, if the environment prefix is empty. Don't worry, I'm going to elaborate on this next.
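As an illustration of how these three parameters combine (the actual provider logic may differ), a name builder could look like this:

```python
def replica_set_name(sid: str, convention: dict) -> str:
    # Apply the case rule from namingConvention.json, then wrap
    # the sid with the configured prefix and sufix.
    name = sid.upper() if convention["case"] == "upper" else sid.lower()
    return convention["prefix"] + name + convention["sufix"]

convention = {"prefix": "MOTOGP-", "case": "upper", "sufix": "-RACE"}
print(replica_set_name("ITALY", convention))  # MOTOGP-ITALY-RACE
```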

environmentOrder.json

configPath
└── common/
    └── replicaSet/
        └── environmentOrder.json

Indicates the order of the environments in which each Replica Set should be created. It allows a Replica Set to receive the same port number as another one created in a previous environment, by defining a basesid. It's used when checkCrossEnvironment is true. This file is required.

Example:

[
    {
        "ID": 0,
        "Name": "dev",
        "Prefix": "D"
    },
    {
        "ID": 1,
        "Name": "test",
        "Prefix": "T"
    },
    {
        "ID": 2,
        "Name": "homolog",
        "Prefix": "H"
    },
    {
        "ID": 3,
        "Name": "prod",
        "Prefix": "P"
    },
    {
        "ID": 4,
        "Name": "review",
        "Prefix": ""
    }
]

In this file, the basesid is equal to the sid of the database in the review environment, because its prefix is empty. Let's walk through an example for each environment. For the basesid ITALY we have:

  • dev: {"sid": "DITALY", "replicaSetName": "MOTOGP-DITALY-RACE"}
  • test: {"sid": "TITALY", "replicaSetName": "MOTOGP-TITALY-RACE"}
  • prod: {"sid": "PITALY", "replicaSetName": "MOTOGP-PITALY-RACE"}
  • homolog: {"sid": "HITALY", "replicaSetName": "MOTOGP-HITALY-RACE"}
  • review: {"sid": "ITALY", "replicaSetName": "MOTOGP-ITALY-RACE"}
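The list above can be reproduced by combining the environment prefix from environmentOrder.json with the naming convention shown earlier. A self-contained sketch (not the provider's actual code):

```python
# Environment prefixes from environmentOrder.json, in creation order.
environments = [("dev", "D"), ("test", "T"), ("homolog", "H"),
                ("prod", "P"), ("review", "")]
# Prefix/sufix from namingConvention.json.
convention = {"prefix": "MOTOGP-", "sufix": "-RACE"}

def names_for(basesid: str) -> dict:
    # For each environment: sid = environment prefix + basesid,
    # replicaSetName = naming-convention prefix + sid + sufix.
    result = {}
    for env, env_prefix in environments:
        sid = env_prefix + basesid.upper()
        result[env] = {"sid": sid,
                       "replicaSetName": convention["prefix"] + sid + convention["sufix"]}
    return result

print(names_for("ITALY")["dev"])
# {'sid': 'DITALY', 'replicaSetName': 'MOTOGP-DITALY-RACE'}
```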

calculatePortConfig.json

configPath
└── common/
    └── calculatePort/
        └── calculatePortConfig.json

You can set a port in the inventory or let the provider calculate one for you. This file contains configurations for the port calculation method. It's based on the principle that all nodes of a Replica Set have the same port number. This file is also required.

If checkCrossEnvironment is enabled, the provider will search for a Replica Set in each environment, based on the naming convention and the order of environments; if one is found, the new Replica Set gets the same port number. Otherwise, the provider will calculate a port number using one of the methods.

Example:

{
    "method": "hash",
    "portRangeStart": 20000,
    "portRangeEnd": 40000,
    "checkCrossEnvironment": true,
    "allowDuplicatePort": false
}

The methods for calculating a port number are hash, nextOfProject, and nextOfEnvironment. I'm going to describe each method:

  • hash: a hash is calculated from the basesid of the Replica Set; with that hash, the provider picks a port number between portRangeStart and portRangeEnd.
  • nextOfProject: gets the highest port number in the project and adds one.
  • nextOfEnvironment: gets the highest port number in the environment and adds one.
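To make the hash method more concrete, here is one possible way to map a basesid deterministically into the configured range. The choice of SHA-256 and the reduction to the range are assumptions for illustration, not the provider's exact scheme:

```python
import hashlib

def hash_port(basesid: str, port_range_start: int, port_range_end: int) -> int:
    # Hash the basesid so the same name always yields the same port,
    # across environments and across runs.
    digest = hashlib.sha256(basesid.encode("utf-8")).digest()
    value = int.from_bytes(digest[:8], "big")
    # Reduce the hash into [port_range_start, port_range_end], inclusive.
    return port_range_start + value % (port_range_end - port_range_start + 1)

port = hash_port("ITALY", 20000, 40000)
print(20000 <= port <= 40000)  # True
```

Because the mapping is deterministic, re-running the calculation for the same basesid always returns the same port, which is what makes a hash-based method attractive when allowDuplicatePort is false.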

As I said in previous articles, this custom provider for OPS Manager was designed to be flexible and stable, and as you can see, each aspect was analyzed in detail to provide a robust solution.

That's all for today, and stay tuned, because there will be more details in future articles.

By Danilo Pala, 31/10/2025