Managed Services: Kaseya v6.5 Database Growth

Kaseya is a widely used remote monitoring and management (RMM) tool. Its main offerings include remote control, patching, backup, auditing, and alerting across any number of workstations or servers on the same or separate networks.

This post is for frustrated Kaseya admins with database growth issues, and it assumes basic familiarity with the Kaseya database structure and SQL. The information here is a bit legacy, and I’ve been out of the Kaseya game for over a year, so my memory may be fuzzy, but hopefully this post is still helpful to someone.

A Kaseya database I managed hovered around 100GB for approximately 3,500 endpoints. The average database size for a deployment with that many endpoints is nowhere near that large. Beginning with Kaseya 6.1 and continuing in later releases, agent procedure logging became much more verbose. This is both good and bad: verbose logs help you track down misfiring agent procedures, but if you have complex agent procedures with multiple if-statements, each if-statement is processed and logged as its own row in the database.

We had recurring scripts running multiple times a day on thousands of endpoints, and that generated a ton of noise that was then written into the KSubscribers database. This wasn’t ideal, as the growth would continue until the server ran out of space.

Diving into the KSubscribers database, it was very clear that the bulk of the data resided in the dbo.scriptlog table. Opening that table up showed me just how verbose Kaseya had become.
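
If you want to check your own deployment, a quick way to see which tables dominate the database is a per-table size report from SQL Server’s built-in catalog views. This is a minimal sketch using only standard system views, nothing Kaseya-specific:

    -- Rough per-table size report; run against the KSubscribers database in SSMS.
    SELECT  t.name AS TableName,
            SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS ApproxRows,
            SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB   -- pages are 8 KB
    FROM    sys.dm_db_partition_stats AS ps
    JOIN    sys.tables AS t ON t.object_id = ps.object_id
    GROUP BY t.name
    ORDER BY ReservedMB DESC;

In our case, dbo.scriptlog sat at the top of that list by a wide margin.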

Here is what I did to resolve the issue:

  • Truncated the dbo.scriptlog table. Yes, I lost the scriptlog history, but the data was mostly useless in the state it was in anyway. I chose TRUNCATE over DELETE because we had millions and millions of rows of garbage data, and with each deleted record being its own logged transaction, a DELETE would have slowed our VSA down for an unacceptable amount of time.
  • Shrunk the database. You need to shrink it, as the database file does not get smaller simply because you removed data from it. Shrinking can be done on the fly without causing much of a performance hit.
  • Created a recurring T-SQL script to delete records in dbo.scriptlog older than 30 days (a rough sketch of all three steps follows this list). Yes, Kaseya supposedly has this functionality built in, but in our experience it never really worked.
  • Once the 30-day mark hit, data growth pretty much plateaued.
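
For anyone who wants to do the same thing, here is roughly what those three steps look like in T-SQL. Treat it as a sketch from memory, not a drop-in script: the database name (ksubscribers) and dbo.scriptlog are the standard Kaseya objects, but the datetime column I filter on (EventTime) is a placeholder, so verify the actual column name in your schema before scheduling anything.

    USE ksubscribers;
    GO

    -- 1) One-time cleanup: TRUNCATE removes all rows with minimal logging,
    --    unlike DELETE, which logs every row it removes.
    TRUNCATE TABLE dbo.scriptlog;
    GO

    -- 2) Reclaim the disk space; the data file does not shrink on its own
    --    just because rows were removed.
    DBCC SHRINKDATABASE (ksubscribers);
    GO

    -- 3) Ongoing cleanup (schedule as a SQL Server Agent job): purge rows
    --    older than 30 days in small batches so each transaction stays short.
    --    NOTE: "EventTime" is a placeholder for the datetime column in
    --    dbo.scriptlog; check your schema for the real name.
    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000)
        FROM dbo.scriptlog
        WHERE EventTime < DATEADD(DAY, -30, GETDATE());

        IF @@ROWCOUNT = 0 BREAK;
    END;
    GO

Deleting in batches keeps the transaction log from ballooning, which was the whole reason for avoiding one giant DELETE in the first place.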

Minimizing Growth due to Noisy Event Logs

It is very smart to capture as much data as you can, as you are most likely funneling alerts into your PSA from Kaseya and creating tickets out of them. If you aren’t, you should be. You don’t want to miss anything, so you crank up event logging within Kaseya. However, if you are too liberal with what you are capturing, you will quickly overload the VSA. This is what we ended up doing:

  • Logged Error, Warning, and Critical events only
  • Set retention to two weeks
  • Created recurring SQL scripts to alert us on noisy (at-risk) computers and servers (see the sketch below). If a computer is throwing thousands of errors into the ntEventLog[xxxx] tables in Kaseya, you want to have someone check it out; besides unnecessarily growing your KSubscribers database, the computer probably has a technical issue that needs to be addressed.
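
The noisy-machine check looked roughly like the query below. I no longer have the exact script, and Kaseya splits the event logs across several numbered tables, so treat the table name (ntEventLog01) and the column names (agentGuid, EventTime) as placeholders to verify against your own KSubscribers schema:

    -- Flag machines writing an unusual number of events in the last 24 hours.
    -- "ntEventLog01", "agentGuid", and "EventTime" are placeholders; check the
    -- actual table/column names in your KSubscribers database before using this.
    SELECT  agentGuid,
            COUNT(*) AS EventsLast24h
    FROM    dbo.ntEventLog01
    WHERE   EventTime >= DATEADD(HOUR, -24, GETDATE())
    GROUP BY agentGuid
    HAVING  COUNT(*) > 1000          -- tune the threshold to your environment
    ORDER BY EventsLast24h DESC;

Feed the results into whatever alerting you already have; the point is simply to surface the machines that deserve a human look.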

Hope this helped.

