Solved – MySQL refusing to start

I had upgraded my version of MySQL Server via the ports a few weeks back, and usually ports is very good at addressing any compatibility issues when bumping up the versions. A quick read of UPDATING had not flagged any real concerns, but on the reboot MySQL was steadfastly refusing to start. No obvious error messages were thrown; it just silently refused to budge.

Googling presented several solutions, but none palatable (wipe the config and databases and start again). However, one suggestion was to check the {hostname}.err file contained within /var/db/mysql, and lo and behold a quick tail {hostname}.err displayed the error that I had been missing: namely that query_cache_type=0 and query_cache_size=0 had been removed starting with v8 and needed to be deleted from the my.cnf file.

Fixing the my.cnf file and a quick service mysql-server start restored my databases and we are back up and running.
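For anyone hitting the same wall, the diagnosis and fix can be sketched as below (the error log is named after the machine's hostname; the my.cnf location shown is an assumption and may differ on your install):

```shell
# Read the tail of the MySQL error log to see the real failure reason
tail -n 50 /var/db/mysql/$(hostname).err

# Remove the query cache settings (deleted in MySQL 8) from my.cnf,
# keeping a .bak copy of the original file (BSD sed syntax)
sed -i .bak -e '/^query_cache_type/d' -e '/^query_cache_size/d' /usr/local/etc/mysql/my.cnf

# Start the server again
service mysql-server start
```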

Updating Ruby via Ports

Oh, I do hate it when /usr/ports/UPDATING refers you to an entry 3 years prior on how to update across a major revision bump, and you then need to dig out and correct the numbers to make it work. So I am just going to jot down my edits here so I can find them more easily.

  If you use portmaster, install the new ruby, then rebuild all ports that depend on ruby:

  # cd /usr/ports/lang/ruby31 && make install

  # portmaster -o lang/ruby31 lang/ruby30

  # portmaster -R -r ruby-3.1

Hardening the Server

For many months, the nightly security emails had been warning that db5 and gcc9 were out of date and no longer supported. Numerous attempts to remove them just threw up package dependency errors, so today I set out to tackle them.

gcc9 proved to be the easiest: whatever previous package had been holding it back had clearly gone, and a simple sudo pkg delete gcc9 did not throw up any other package dependencies, so it soon went.

Not so for db5.

Webalizer, apr and apache24 still insisted on db5. All relatively mainstream packages, but all still using db5. Several Googles showed others have had similar problems ever since it was first flagged as deprecated, though the volume of error messages had decreased since July.

Adding dbd=18 to make.conf as the default db version seems to have resolved the problem, after a make deinstall clean on the apr1, apache24 and webalizer ports. All 3 then re-built nicely and did not pull in db5, which previous attempts to remove and rebuild had done.
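For reference, the usual way to pin a default Berkeley DB version for the ports tree is the DEFAULT_VERSIONS knob in /etc/make.conf. A sketch, assuming the dbd=18 above refers to the bdb default from the ports framework:

```
# /etc/make.conf – prefer Berkeley DB 18 for ports that offer a choice
DEFAULT_VERSIONS+= bdb=18
```

After setting this, deinstalling and rebuilding the affected ports picks up the newer library instead of db5.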

Hopefully now, after fixing PHP 8.1 and now db5 plus gcc9, the server will live on for another year as a test-bed and platform for my ramblings.

Energy Bills

With the current cost of living crisis in the UK, driven by the increase in energy costs, I am limiting the number of hours I leave the server up and online these days. Certainly over the summer, when I had little time to dabble with the various services that I run, it made good economic sense to shut the server down if I was not going to be working on it for a few days.

Therefore expect to see erratic uptime for this website/server, unless I take the plunge and migrate all the content to one of the cloud accounts I am already paying for. The downside will be the loss of local services and a properly offline OneDrive copy that I use for backup purposes.

PHP 8.x upgraded

With the pending retirement of PHP 7.3, the last thing on last month's list of things to do was to update PHP to a supported version.

Re-building from ports is not quick, but it is straightforward. First generate a list of all your installed PHP packages and save it safely where you can access it. Then use pkg delete -f to remove each package on your list, not forgetting mod_php73. Then find the version of PHP you want to replace it with in /usr/ports/lang, and install php80 and the php80-extensions.
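Those steps can be sketched as below (the package patterns and the /root path are examples from my setup; adjust for the versions installed on yours):

```shell
# 1. Save a list of the currently installed PHP packages somewhere safe
pkg query '%n-%v' | grep -i '^php' > /root/php-packages.txt
pkg info -x mod_php >> /root/php-packages.txt

# 2. Force-remove each old package, including mod_php73 (glob match with -g)
pkg delete -f -g 'php73-*'
pkg delete -f mod_php73

# 3. Build the replacement version and its extensions from ports
cd /usr/ports/lang/php80 && make install clean
cd /usr/ports/lang/php80-extensions && make install clean
```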

I had planned to jump straight to PHP 8.1, but the php81-dom extension is no longer available to install. Ditto php81-hash and php81-json (both are now compiled into PHP itself). Thankfully I didn't need -hash nor -json, and rolling back to PHP 8.0 sufficed.

60 minutes later, after manually deinstalling and then installing, I had a working system. Well, almost: in my haste to reboot the server I had forgotten to install mod_php80, so on the reboot Apache failed to start as it did not recognise some of the PHP directives in httpd.conf. Commenting the errant lines out did not work either: Apache now started, but could not interpret the .php files and dumped them straight to the screen. Then the penny dropped, and a quick install of mod_php80 and a reboot finally made Apache happy and everything working again.
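The httpd.conf lines that trip this up are typically the mod_php LoadModule and handler directives. A sketch of what the mod_php80 port expects (the module path and filename are assumptions and may vary by install):

```
LoadModule php_module libexec/apache24/libphp.so
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
```

Without the LoadModule line Apache refuses to start; without the SetHandler block it serves .php files as plain text, which is exactly the behaviour described above.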

Why reboot and not just stop/start Apache et al.? Well, it's a small test rig and not running any production grade services, so a complete clear-out of the memory from any residual horlicks I have made in past configs or port building means I give it the best hope of running smoothly until the next big upgrade.

Upgrade decided

With all the furore over Log4j and JNDI, I decided it was time to take the plunge and force the update thru. Although I wasn't running Log4j, there is probably a heap of software out there that will need to be bumped to the latest and greatest, so it seemed sensible to opt for Release 14, given it's now on the point 4 update so all the major issues should be ironed out.

Server is mid-update as I type and is going smoothly.

Biggest issue was getting all the ports up to date before I started, as a couple of Python-related ports were being stubborn, looking for a packaging update to >20, but portmaster could not seem to find the required dependency. Turns out installing or updating py-packaging made the necessary fix, and now all the ports are building nicely I can attempt the OS update, as everything will need to be rebuilt again from ports.
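For the record, the fix was along these lines (the port origin devel/py-packaging is my assumption from memory; the flavoured package name varies with your default Python version):

```shell
# Bring the Python packaging library up to date from ports
cd /usr/ports/devel/py-packaging && make install clean

# Then retry the remaining port updates
portmaster -a
```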

Upgrade Dilemma

While logging into the box, it politely reminded me that FreeBSD 12.2 will be going EOS soon and to upgrade within the next 2 months. “No problem” I thought: as this is mostly a test rig and hobby box, I normally go for the most recent. However, this box is now rather long in the tooth, only a dual core M86 ThinkStation that was originally built on FreeBSD 10.1 I think, and it has already been thru several updates and upgrades.

The merging process always seems to be a bit of a dark art and requires a google of mastering vi to edit and save the resultant files. I always seem to end up with a couple of orphaned .so files too, so some libraries don't work. This then provides me hours of fun working out which package or port didn't actually build properly in the upgrade process, and attempting to rebuild again via ports. More by trial and error, I get the necessary files working and limp on for another year.

So, currently trying to decide whether to go for the latest and greatest v13, take the v12.3 release that is still only in RC editions, or wait until 12.3 is out proper. Current thinking is the 12.3 General Availability release, and to use the time to re-investigate the AWS and Azure FreeBSD offerings, create a fresh image in the cloud on FB13 and move my dabbling to there.

Freeing update disk space

Since moving house, the FreeBSD box has reverted to a headless server, with all updates taking place via the CLI. I have Webmin installed but this is mainly as a backup / alternative and to view a few things graphically, like disk space.

I use the OneDrive port to manually sync my OneDrive; as I only run it on demand, it works as an offline copy. The downside is that I am rapidly running out of disk space, as I never envisaged a 1Tb store being backed up to the FB disks.

With the combination of a headless server and no longer any need for the X interface, I was left with a large number of X applications no longer needed. With judicious use of pkg delete and port deinstalls, I removed a fair number of unnecessary applications. I thought that was that, but was then reminded of the pkg autoremove command, which freed up another 4Gb from the main drive and removed 286 packages from the tree. As I look to update to the next branch of the FreeBSD upgrade stream, this should ease the amount of packages and data that will need to be processed to complete the upgrade.
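The clean-up sequence boils down to something like this (the package name in the first command is purely illustrative):

```shell
# Remove the explicitly unwanted applications
pkg delete xorg

# Preview what pkg now considers orphaned (dry run)...
pkg autoremove -n

# ...then actually remove the automatically-installed leftovers
pkg autoremove -y
```

Running the dry run first is worth it: autoremove will happily take out anything no longer referenced by an explicitly installed package, which occasionally includes something you still wanted.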

LetsEncrypt update failures

As the server is hosted from home, sometimes the droning of the hard drive and fans annoys me, and as I only host this for fun and self-learning it occasionally gets turned off. This meant it missed the cron jobs that should have replaced the cert long before the renewal date.

Added to that, when I moved I locked down the router config and only allowed port 443 thru to the webserver, to permit only TLS/SSL traffic and not plain HTTP. In the main this has worked well, but it also meant the certbot script failed to renew the cert on demand, as it could not write to the .well-known folder over port 80.

So, having now port-forwarded port 80 to the server, the certificate has updated as required.
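With port 80 open again, the renewal can be exercised before the scheduled cron run (a sketch, assuming the standard certbot port is installed):

```shell
# Test the full HTTP-01 challenge against the staging servers first
certbot renew --dry-run

# Then perform the real renewal
certbot renew
```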

Clearing up disc space

My ports distfiles directory had grown to over 30Gb in the 4 years since I last did a complete clean fresh build, and as it's something of a test rig for trying stuff out, it had grown rather large.

By running the following commands I regained c.22Gb of disc space, ready for whichever pet project I choose to tinker with next.

sudo portmaster --check-depends      # verify recorded dependencies match reality
sudo portmaster --check-port-dbdir   # prune stale entries from the port options db
sudo portmaster -s                   # remove ports no longer depended upon
sudo portmaster -y --clean-distfiles # delete out-of-date distfiles, answering yes