LGTM Enterprise 1.22.1

Adding and removing work pool workers

Monitoring usage and analysis progress may indicate that you need to alter the resources in LGTM's work pool.

Adding and removing worker hosts, and the worker daemons they run, are system-administrator-level tasks. Before you change the number of worker hosts and worker daemons available for LGTM analysis, note the following:

You do not need to stop LGTM Enterprise when you change the number of work pool hosts or worker daemons.

Editing the number of worker daemons running on existing hosts

  1. On the coordinator server, edit the lgtm-cluster-config.yml file and locate the workers block.

  2. In the hosts list, edit the type and number of worker daemons running on each host, as required:

    There are three type values:

    • GENERAL—specifies the number of worker daemons for general tasks, including the standard building and analysis of projects.
    • QUERY—specifies the number of worker daemons for query console analysis requests. Until you have many active query writers, you won't need many daemons of this type. The number required depends on the quantity, frequency, and complexity of queries submitted in the query console, and the quantity and size of projects that queries are run against.
    • ON_DEMAND—specifies the number of worker daemons for jobs where a user is likely to be waiting for results (excluding query jobs). When teams start using LGTM's code review integration with their projects—for example, automatic analysis of GitHub pull requests—you may want to specify some workers of this type. This should reduce the time it takes to post analysis results back to the code review system.

    workers:
      hosts:
      - hostname: "hostname"
        specs:
        - type: "GENERAL"
          copies: n
        - type: "QUERY"
          copies: n
        - type: "ON_DEMAND"
          copies: n
      - hostname: "hostname2"
        ...

  3. Save your changes.
  4. Generate and then deploy the files for the new configuration. See the information starting from Generating files for the configuration.

When you change the number of worker daemons running on an existing worker host, the files generated for the new configuration replace the files for the previous configuration. Any jobs that were running are interrupted and will not be completed. However, when the services for the worker daemons start and register with LGTM, any interrupted jobs are automatically restarted.

Adding new hosts to the work pool

If both of the following statements are true for the worker host machine you want to add to the work pool:

  1. The machine runs a different platform from the coordinator machine (for example, Windows, where the coordinator machine runs Red Hat)
  2. The platform used by the machine is not used on any existing worker hosts (for example, all the other worker hosts run Red Hat or Debian)

check that the lgtm-<version>/lgtm directory on the coordinator machine (typically below the $HOME/lgtm-releases directory of the user who installed/upgraded LGTM Enterprise) contains the appropriate worker package for the new platform:

  • lgtm-worker-<version>_all.deb for worker hosts running Debian and Ubuntu
  • lgtm-worker-<version>.noarch.rpm for worker hosts running Red Hat and CentOS
  • lgtm-worker_<version>.msi for worker hosts running Windows
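If you want to script this check, the sketch below shows one way to confirm that the right package is present before you continue. The check_worker_package function is a hypothetical helper, not an LGTM tool; the file-name patterns simply restate the package names listed above, and the directory you pass in is assumed to be the lgtm-<version>/lgtm directory.

```shell
# Hedged sketch (not an official LGTM utility): report whether the worker
# package for a given platform exists in a release directory.
check_worker_package() {
  dir="$1"; platform="$2"
  # Patterns restate the package names from the documentation above.
  case "$platform" in
    debian|ubuntu) pattern='lgtm-worker-*_all.deb' ;;
    redhat|centos) pattern='lgtm-worker-*.noarch.rpm' ;;
    windows)       pattern='lgtm-worker_*.msi' ;;
    *) echo "unknown platform: $platform"; return 2 ;;
  esac
  # Unquoted $pattern so the shell expands the glob against the directory.
  if ls "$dir"/$pattern >/dev/null 2>&1; then
    echo "found"
  else
    echo "missing"
  fi
}

# Example usage, assuming the default install location:
# check_worker_package "$HOME/lgtm-releases/lgtm-1.22.1/lgtm" windows
```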

Adding a new worker host machine

  1. On the coordinator server, edit the lgtm-cluster-config.yml file and locate the workers block.
  2. In the hosts section, add a new hostname subsection for each new host and specify the new host names.
  3. For each new host that runs Windows, specify platform: "windows".
  4. For each new host name, define the number of general, query, and/or on_demand worker daemons to run (see example).
  5. Optionally, if the generally defined location for temporary files is not suitable for a host, specify an alternative location using the temp_path property.
  6. Save your changes.
  7. Generate and then deploy the files for the new configuration. See the information starting from Generating files for the configuration.

Example

For example, in the extract from a cluster configuration file below, two workers already run on the same server as the control pool services (lgtm-host). The extract shows the addition of two further general workers running on an additional server called linux-workers-host1, and two more running on an additional server called windows-workers-host1. A different temporary directory is set for the new Windows host because the general location is unsuitable.

workers:
  hosts:
  - hostname: "lgtm-host"
    specs:
    - type: "GENERAL"
      copies: 1
    - type: "QUERY"
      copies: 1
  - hostname: "linux-workers-host1"
    specs:
    - type: "GENERAL"
      copies: 2
  - hostname: "windows-workers-host1"
    platform: "windows"
    temp_path: "C:\\temp\\"
    specs:
    - type: "GENERAL"
      copies: 2
  temp_path: "/var/cache/lgtm/workers/"

Removing hosts from the work pool

  1. On the host that you want to remove from the work pool, shut down and remove all LGTM services.
    • Linux: sudo lgtm-down (to stop and remove services)
    • Windows: lgtm-down.bat (to stop services) followed by lgtm-uninstall-services.bat (to remove services)
  2. On the coordinator server, edit the lgtm-cluster-config.yml file and locate the hostname section for this host.
  3. Delete the entire section that specifies the properties for the host. This may include any of the following properties: specs, type, copies, temp_path, platform, and labels (for example, see the different hostname sections in the example above).
  4. Save your changes. There is no need to deploy these changes to the machines in the cluster.
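For instance, removing the Windows host shown in the earlier example means deleting its entire hostname section (including its platform, temp_path, and specs properties), leaving a configuration like this:

workers:
  hosts:
  - hostname: "lgtm-host"
    specs:
    - type: "GENERAL"
      copies: 1
    - type: "QUERY"
      copies: 1
  - hostname: "linux-workers-host1"
    specs:
    - type: "GENERAL"
      copies: 2
  temp_path: "/var/cache/lgtm/workers/"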

Any jobs that are in progress on the host when you shut down the LGTM services are interrupted and will not be completed. These jobs will be picked up automatically by other worker daemons when they request their next tasks from the queue.

Verify the changes

You can verify that your changes have been applied by going to the Infrastructure page in the administration section of LGTM Enterprise. The number of Current workers is automatically updated to reflect any increase in the number of worker daemons available to LGTM. You can click the Worker management tab to see the workers that are registered on each worker host.

If you have reduced the number of worker daemons available, click the Clear all registrations button to refresh the work pool information. This clears all outdated information. All worker daemons automatically re-register when they complete their current task.

Alternatively, you can use the API to check the current number and status of workers. For more information, see Monitoring the health of the application.

Related topics