LGTM Enterprise 1.20

Adding and removing workers

Adding and removing worker hosts, and the worker daemons they run, are system-administrator-level tasks. Before you change the number of worker hosts and worker daemons available for LGTM analysis, note the following:

  • You do not need to stop LGTM Enterprise when you change the number of work pool hosts or worker daemons.

Editing the number of worker daemons running on existing hosts

  1. On the coordinator server, edit the lgtm-cluster-config.yml file and locate the workers block.
  2. In the hosts section, edit the number of worker daemons running on each host as required:
    • general—specifies the number of worker daemons for general tasks, including build/analysis.
    • query—specifies the number of worker daemons for query console analysis requests. Until you have many active query writers, you won't need many daemons of this type.
    • on_demand—specifies the number of worker daemons for jobs where a user is likely to be waiting for results (excluding query jobs). When teams start using LGTM's code review integration with their projects—for example, automatic analysis of GitHub pull requests—you may want to specify some workers of this type. This should reduce the time it takes to post analysis results back to the code review system.

    workers:
      hosts:
      - hostname: "hostname"
        general: n
        query: n
        on_demand: n
      - hostname: "hostname2"
        ...

  3. Save your changes then generate and deploy updated service configurations.

When you change the number of worker daemons running on an existing worker host, the existing configuration is replaced by the new service configuration. Any jobs that are running at the time are interrupted and will not be completed. However, when the worker daemon services start and register with LGTM, any interrupted jobs are automatically restarted.
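For instance, to scale up build and analysis capacity on an existing host, you might increase its general count. The following sketch assumes a host named lgtm-host (as in the example later in this topic) whose general daemons are increased from one to three; the counts are illustrative:

workers:
  hosts:
  - hostname: "lgtm-host"
    general: 3   # increased from 1 to run two additional general worker daemons
    query: 1

After saving a change like this, generate and deploy the updated service configurations as described in step 3 above.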

Adding new hosts to the work pool

  1. On the coordinator server, edit the lgtm-cluster-config.yml file and locate the workers block.
  2. In the hosts section, add a new hostname subsection for each new host and specify its host name.
  3. For each new host that runs Windows, specify platform: "windows".
  4. For each new host name, define the number of general, query, and/or on_demand worker daemons to run (see the example below).
  5. Optionally, if the location defined for temporary files at the workers level is not suitable for a particular host, specify an alternative location using the temp_path property.
  6. Save your changes then generate and deploy updated service configurations.

Example

For example, in the extract from a cluster configuration file below, two workers already run on the same server as the control pool services (lgtm-host). The last two hostname entries show the addition of two further general workers running on an additional server called linux-workers-host1 and two more running on an additional server called windows-workers-host1. A different temporary directory is set for the new Windows host because the general location, a Linux path, is not suitable for Windows.

workers:
  hosts:
  - hostname: "lgtm-host"
    general: 1
    query: 1
  - hostname: "linux-workers-host1"
    general: 2
  - hostname: "windows-workers-host1"
    platform: "windows"
    general: 2
    temp_path: "C:\\temp\\"
  temp_path: "/var/cache/lgtm/workers/"

Removing hosts from the work pool

  1. On the host that you want to remove from the work pool, shut down and remove all LGTM services.
    • Linux: sudo lgtm-down (to stop and remove services)
    • Windows: lgtm-down.bat (to stop services) followed by lgtm-uninstall-services.bat (to remove services)
  2. On the coordinator server, edit the lgtm-cluster-config.yml file and locate the hostname section for this host.
  3. Delete the entire section that specifies the properties for the host. This may include any of the following properties: general, query, on_demand, temp_path, platform, and labels (for example, see the different hostname sections in the example above and the sketch after this list).
  4. Save your changes. There is no need to deploy these changes to the machines in the cluster.
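As an illustration, continuing the example shown earlier in this topic, removing the Windows host from the work pool means deleting its whole hostname section, leaving the workers block looking like this sketch:

workers:
  hosts:
  - hostname: "lgtm-host"
    general: 1
    query: 1
  - hostname: "linux-workers-host1"
    general: 2
  temp_path: "/var/cache/lgtm/workers/"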

Any jobs that are in progress on the host when you shut down the LGTM services are interrupted and will not be completed. These jobs will be picked up automatically by other worker daemons when they request their next tasks from the queue.

Verify the changes

You can verify that your changes have been applied by going to the Infrastructure page in the administration section of LGTM Enterprise. The number of Current workers is automatically updated to reflect any increase in the number of worker daemons available to LGTM. You can click the Worker management tab to see the workers that are registered on each worker host.

If you have reduced the number of worker daemons available, click the Clear all registrations button to refresh the work pool information. This clears all outdated information. All worker daemons automatically re-register when they complete their current task.
