Imunify360 custom SSH port: stop blocking your admins

Tell Imunify360 about your non-22 SSH port so its brute-force protection stops blocking legitimate admin connections, with a verification command.

You moved sshd off port 22 and now your own engineers are getting banned by Imunify360. The fix is two minutes; the documentation is buried under three FAQ pages on a vendor site that does not rank for the obvious search.

The symptom

# From the engineer's laptop, after three failed logins:
ssh -p 15600 admin@cpanel-host
# Connection closed by 198.51.100.10 port 15600
# On the server:
imunify360-agent ip-list local --filter ip=198.51.100.10
# +----------------+----------+--------+
# | IP             | Status   | Source |
# +----------------+----------+--------+
# | 198.51.100.10  | BLOCKED  | OSSEC  |
# +----------------+----------+--------+

Imunify360's OSSEC integration treats every failed SSH attempt as a brute-force signal. If sshd is listening on a port Imunify360 does not know about, the agent does not correlate the failures with SSH; it just counts connection attempts to an unknown port and bans the source IP.
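Before telling Imunify360 anything, confirm which port sshd is actually configured on. A minimal sketch, assuming a single Port directive in sshd_config (sshd defaults to 22 when none is set); get_ssh_port is a hypothetical helper, not a vendor tool:

```shell
# get_ssh_port: print the Port from an sshd_config file, defaulting to 22.
# Illustrative helper; handles only a single Port line, no Match blocks.
get_ssh_port() {
  awk '/^[[:space:]]*Port[[:space:]]/ { p = $2 } END { print (p ? p : 22) }' "$1"
}

# Usage: get_ssh_port /etc/ssh/sshd_config
```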

The fix

Edit /etc/sysconfig/imunify360/imunify360.config and set the SSH_PORT value under the [OSSEC] section to your real port:

[OSSEC]
SSH_PORT = 15600

Restart the agent so OSSEC reloads the config:

systemctl restart imunify360-agent

Verify the agent picked it up:

imunify360-agent param show OSSEC.SSH_PORT
# OSSEC.SSH_PORT = 15600

Cleanup: unban your own admin IP

imunify360-agent ip-list local --remove 198.51.100.10
imunify360-agent ip-list white --add 198.51.100.10 \
  --comment "ops admin laptop, do not block"

A whitelist on the admin IP stops this from happening again the next time someone misclicks a password. Pair it with key-only auth on the SSH side so the whitelist does not become the only line of defence.
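The key-only side is plain OpenSSH configuration; a minimal sshd_config fragment (run sshd -t to validate before restarting sshd):

```
PasswordAuthentication no
PubkeyAuthentication yes
```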

Why this is not documented well

Two reasons. First, Imunify360's default config assumes the cPanel norm of port 22 plus key-only, so the SSH_PORT knob looks like a no-op until you change sshd. Second, the OSSEC subsystem inside Imunify360 is loosely coupled to the rest of the agent: it has its own rules directory at /var/imunify360/files/sigs/ossec/, and the SSH_PORT setting above is one of the few user-tunable values that bridges the two.

Sanity check after the change

# Trigger a deliberate failure from the admin laptop, then watch:
tail -F /var/log/imunify360/console.log | grep -i ossec

You should see Imunify360 logging the failure as sshd_brute_force (known SSH event) and not as unknown_port_scan (the misclassified signal). If you still see the latter, your SSH_PORT is wrong or the agent did not reload.
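If you want this check in a post-change runbook, a small sketch that classifies a saved log excerpt; the event names come from the log lines described above, and check_ossec_log is a hypothetical helper:

```shell
# check_ossec_log: classify a saved log excerpt after the deliberate failure.
# Illustrative helper; event names taken from the article's log description.
check_ossec_log() {
  if grep -q 'unknown_port_scan' "$1"; then
    echo 'MISCLASSIFIED'   # SSH_PORT still wrong, or agent not reloaded
  elif grep -q 'sshd_brute_force' "$1"; then
    echo 'OK'              # failures correlated to SSH as expected
  else
    echo 'NO_EVENTS'       # trigger a deliberate failure first
  fi
}
```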

This bites every team that hardens SSH past the defaults.

How ServerGuard uses this

When our SSH-hardening use case moves sshd to a non-default port, it also pushes the matching SSH_PORT change into Imunify360 and restarts the agent in the same approval window. Two changes, one review.
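The pairing can be sketched as one function that writes the same port into both files, assuming the INI-style Imunify360 config shown earlier and a single Port line in sshd_config; set_ssh_port is hypothetical, and paths and restarts are illustrative:

```shell
# set_ssh_port: write the same port into sshd_config and the Imunify360
# OSSEC section so the two never drift apart.
# Illustrative sketch; adapt paths and add the restarts for your setup.
set_ssh_port() {
  port="$1"; sshd_cfg="$2"; im_cfg="$3"
  sed -i "s/^[[:space:]]*Port .*/Port ${port}/" "$sshd_cfg"
  sed -i "s/^SSH_PORT[[:space:]]*=.*/SSH_PORT = ${port}/" "$im_cfg"
}

# set_ssh_port 15600 /etc/ssh/sshd_config /etc/sysconfig/imunify360/imunify360.config
# systemctl restart sshd imunify360-agent
```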
