posted this in: General, Servers, Software, Technology
455 Words

I’ve got several servers that I work on, and quite often that involves running regular cron’d tasks that perform various backups and configuration updates for me at odd schedules (as an example, my Rust server wipes fortnightly and needs a config update to change the server name to reflect the last wipe date).

To do things like this, I’ve usually just written a script in PHP and run it at a given interval (daily or otherwise). There’s no real reason I chose PHP for these scripts aside from familiarity with the language; no doubt the same could easily be achieved in Python, shell script or any other language out there.

For now though, PHP serves my needs just fine.
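
As a concrete example, here’s a minimal sketch of what the fortnightly Rust rename script looks like in PHP – the config path, hostname key and schedule below are illustrative rather than my exact setup:

```php
<?php
// update-rust-name.php – a minimal sketch, assuming the server name lives
// in a server.cfg as a "server.hostname" line (path and key are illustrative).
$cfgPath  = '/srv/rust/server/cfg/server.cfg'; // hypothetical path
$wipeDate = date('d/m/Y');

$cfg = file_get_contents($cfgPath);
$cfg = preg_replace(
    '/^server\.hostname .*$/m',
    'server.hostname "JT Rust | Wiped ' . $wipeDate . '"',
    $cfg
);
file_put_contents($cfgPath, $cfg);

// Crontab entry (4am Thursday, even weeks only – cron has no native
// "fortnightly", so a week-parity test fakes it; % must be escaped in crontab):
// 0 4 * * 4 [ $(( $(date +\%s) / 604800 \% 2 )) -eq 0 ] && php /opt/scripts/update-rust-name.php
```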

The problem is, I don’t actually keep these scripts backed up anywhere, or organised in any sort of manner!

The age of GitLab

Over the last couple of days, I’ve implemented GitLab into my homelab stack (JT-LAB). I’ll be using it to store most of my code as a “source of truth”, syncing things out to GitHub afterwards (depending on the project, of course).
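
GitLab has built-in push mirroring (under Settings → Repository → Mirroring) that can handle the GitHub sync on its own; failing that, a small CI job can do it manually. A hedged sketch, where GITHUB_MIRROR_URL is a hypothetical CI/CD variable holding a token-authenticated GitHub remote URL:

```yaml
# .gitlab-ci.yml – manual GitHub sync job (sketch only).
mirror-to-github:
  variables:
    GIT_STRATEGY: none                       # skip the default checkout; we clone fresh below
  script:
    - git clone --mirror "$CI_REPOSITORY_URL" repo
    - cd repo
    - git push --mirror "$GITHUB_MIRROR_URL" # hypothetical variable
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```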

To the Game Servers, Four Branches…

Based on the various server types, specific branches would be used. For now, these would be:

  • Rust
  • Minecraft
  • Factorio
  • Satisfactory

Each game would be represented by its own branch, and each branch would deploy a specific set of commands as needed. For the most part, only Minecraft persists its world; the rest rely on either a voted-wipe or scheduled-wipe paradigm.
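
A rough sketch of how those per-branch deploys could look in a .gitlab-ci.yml – the job and script names are placeholders, not the final setup:

```yaml
# .gitlab-ci.yml – one job per game branch; each branch runs its own commands.
wipe-rust:
  rules:
    - if: '$CI_COMMIT_BRANCH == "rust"'
  script:
    - php scripts/update-rust-name.php       # the rename sketched earlier
    - php scripts/deploy-rust-config.php     # hypothetical config push

backup-minecraft:
  rules:
    - if: '$CI_COMMIT_BRANCH == "minecraft"'
  script:
    - php scripts/backup-minecraft-world.php # Minecraft persists, so back it up instead
```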

To the File Systems, Five Branches…

Then we have servers with actual file resources and assets that I’d like to keep: things like photos, design assets, old code references, etc. These would be:

  • Media
  • Design
  • Research
  • Education
  • Maintenance

And nine, nine branches were gifted to the Websites

I also run a number of websites for friends and family at a prosumer level. I won’t list those projects here, but they do total up to nine! So it all kind of fits the whole LOTR theme I was going for with these titles.

One Repo to Rule them all…

The decision to build everything into one repository to manage all the core backup operations means I have less to track; for a personal system, I think this is fine. Monolithic design probably isn’t the way to go for a much larger operation than mine though!

Announcing…

Cronjobs

So this is the project I’d like to build over the next few days, primarily in combination with jtiong.dev, which will help track the commits I make. Writing these projects up here as more formal project whitepapers might also help with some resume material for my future career 🙂

posted this in: Servers, Software, Technology
217 Words

With my JT-LAB homelab up and running, it stands to reason I should self-host whatever I can to get decent use out of the stupid amount of money I’ve poured into the project. Being able to ensure I’m only sharing the data I actually want to share (for whatever reason) is pretty important to me too.

The majority of my code to date has been stored on GitHub (which is fine; it’s a fantastic, free resource for the world). But my workplace had a GitLab implementation that I thought was done pretty well.

So I’m going to take it upon myself to implement GitLab in JT-LAB, and make sure a version of my work runs from GitLab. GitHub will essentially become my backup for code (GitHub’s uptime being way more reliable than any self-hosted GitLab I’d run).
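
One straightforward way to self-host it is the official gitlab-ce Docker image; a minimal Docker Compose sketch along the lines of GitLab’s Docker docs (hostname, ports and paths are placeholders, and not necessarily how I’ll end up running it):

```yaml
# docker-compose.yml – minimal self-hosted GitLab sketch.
version: "3"
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.jt-lab.local   # placeholder hostname
    ports:
      - "8080:80"                   # web UI
      - "2222:22"                   # git over SSH
    volumes:
      - ./gitlab/config:/etc/gitlab
      - ./gitlab/logs:/var/log/gitlab
      - ./gitlab/data:/var/opt/gitlab
    restart: unless-stopped
```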

Why is this so important?

GitLab is going to work as my core repository and project management system; with it, I’ll be able to store and update my code for the various projects previously mentioned.

What’s the challenge?

Well, after some wrestling, it’s implemented. However, the one area I’m most unsure about is GitLab’s Auto DevOps feature.

Lots to learn!

posted this in: General, Personal, Software, Technology
400 Words

So, I’ve got a “main” website – https://jtiong.com (which is currently Error 500’ing) – running on a fairly old version of Laravel. Since its inception, the site has mainly served as a central one-stop shop for everything about my presence on the internet. Oh, how times have changed.

Nowadays, with the number of domains I own, it makes more sense to split the content and footprint of my stuff on the internet from a singular jtiong.com website into a number of different sites, based on what people are trying to find me for, or to categorise the activities I do.

Domains I have include:

  • jtiong.blog (this site) – my personal blog, which is strictly just personal, non-professional stuff
  • jtiong.dev – where I hope to eventually host some sort of software development info about myself
  • jtiong.network – currently a serverless site experiment, however I hope to change this
  • jtiong.com – a central landing page from which people click through to the other domains

So what does this mean?

Two new projects! The .com and .dev domains will be important as part of my “online resume”, so I really should get them done sooner rather than later…!

However, this also means I need to really look into how I implement these!

Laravel will be driving:

  • jtiong.com – a landing page/gateway system
  • jtiong.network – services and resources for friends & family

I’m looking at using the Socialite package for Laravel to integrate login via Discord. This will mean certain links and features are only visible to friends & family who have certain roles in my Discord server; or at least, that’s the original intent.
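
As a sketch of that flow: core Socialite doesn’t ship a Discord driver, so this assumes the community SocialiteProviders/Discord package is installed and configured. The routes and the role check are illustrative:

```php
<?php
// routes/web.php – hedged sketch of Discord login via Socialite.
use Illuminate\Support\Facades\Route;
use Laravel\Socialite\Facades\Socialite;

Route::get('/auth/discord/redirect', function () {
    // Send the visitor off to Discord's OAuth consent screen.
    return Socialite::driver('discord')->redirect();
});

Route::get('/auth/discord/callback', function () {
    // Back from Discord with an authorised user.
    $discordUser = Socialite::driver('discord')->user();

    // From here, look up the user's roles in my Discord server (e.g. via the
    // Discord API) and gate links/features off those roles – illustrative only.
});
```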

My own framework (which I call Spark) will be driving:

  • jtiong.dev – dev blogs, resources

This dev site will be more of a technical dump to keep me consistently working on my coding skills. The site itself will be a traditional website that rides on the coattails of my intended GitLab installation. The fallback, of course, is to just use the GitHub API, but I’ll only start looking at that later.

The site should just start listing out my commits and which projects they were made on, to try and keep things accountable and interesting. It’s just a cool little showcase project.
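
The commit listing itself should be simple via GitLab’s REST API; a rough sketch in PHP, where the instance URL, project ID and token are placeholders:

```php
<?php
// list-commits.php – hedged sketch of pulling recent commits from GitLab.
$gitlabUrl = 'https://gitlab.example.local'; // placeholder instance URL
$projectId = 42;                             // hypothetical project ID
$token     = getenv('GITLAB_TOKEN');         // personal access token

$ch = curl_init("$gitlabUrl/api/v4/projects/$projectId/repository/commits?per_page=20");
curl_setopt($ch, CURLOPT_HTTPHEADER, ["PRIVATE-TOKEN: $token"]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$commits = json_decode(curl_exec($ch), true);
curl_close($ch);

// Print each commit's short hash and title, newest first.
foreach ($commits as $commit) {
    echo "{$commit['short_id']} {$commit['title']}\n";
}
```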

More features might be added later relevant to doing development work in the future!

posted this in: General, Hardware, Servers, Technology
107 Words

So it’s been several days since my last post about C-states and Ryzen power management stuffing up Unraid OS.

I’m happy to report that things have been rock solid. For the last 90 hours or so, I’ve been continuously downloading my backups from Google Drive (yes, many years’ worth of data) onto the server. At the same time, it’s been actively running as an RTMP bridge for all the security cameras around my house, and as an internal home network portal – all without falling over.

Here’s hoping I didn’t just jinx it….

Update: 8th August 2022 — Unraid’s been running solidly for 10+ days now!

posted this in: Hardware, Servers, Software, Technology
157 Words

I recently chose to go the Unraid route with my media storage server; I was lucky enough to be given a license for Unraid Pro, and straight up, let me say:

  1. It’s easy to use
  2. It’s beautiful to look at
  3. It’s stupid simple to get working

BUT

My server uses an old spare desktop I had lying around:

  • AMD Ryzen 7 1700 (1st generation Ryzen)
  • 32GB DDR4 RAM
  • B450 based motherboard

But therein lies the problem. It turns out that Ryzen systems crash and burn with Unraid by default. You need to go into your BIOS settings and turn off the Global C-State Control power management setting. Insane.

Why am I writing about this?

Because it took me 2 weeks to reach this point: wrestling with Windows storage; wrestling with shoddy backplanes in my ancient server chassis (for which I ordered a replacement case that set me back a pretty penny); a new SAS controller; new SAS cables…

This is an expensive hobby, homelabs.

posted this in: Hardware, Servers, Software, Technology
238 Words

Local media storage. Yeah.

That’s right, I’m running Windows 10 Pro for a home server 😂

It’s been good so far. The machine is pretty old, but it’s there for running things like local media storage and maybe a few other things that aren’t GPU-reliant. It has an ancient PCIe x1 GPU in it (a GT 610, haha) that can’t really do anything more than let me remote in and work on the PC.

Although I do definitely want to run:

  • Core Keeper
  • V Rising

On the PC for friends and family to check out 🙂

Storage is a bit interesting. I forked out for StableBit DrivePool and StableBit Scanner (there’s a bundle you can get), and it’s a simple GUI where I just click +add to expand my storage pool with whatever randomly sized hard drives I have.

Why’d I do this instead of the usual ZFS or Linux-based solution?

Mostly to keep my options open. It’s nice seeing a GUI, Windows can handle my needs on my local network, I’m not doing anything extremely complicated, and the “server” it’s on is going to act as a staging ground for anything bound for the Google Drive archive.

I could just as (probably more) easily achieve the same results on something like Ubuntu Server – except for the game servers mentioned above. Some games simply run much better on a Windows host, and that’s what this machine is for.

posted this in: Hardware, Servers, Technology
58 Words

The JT-LAB rack is finally full; all the machines contained within are the servers I intend to have fully operational and working on the network! Not all of them are turned on right now, though – there are a few machines that need some hardware work done, but that’s a weekend project, I think 🙂

Racked and fully loaded…!
posted this in: Gaming, Servers
22 Words

So the new Minecraft version is out, and with it I’ve created a new Vanilla server for my friends to play on.

posted this in: Ramblings, Servers, Technology
140 Words

Just a short little update for my own records. I’ve acquired:

  • Dell R330 – R330-1
    • 4 x 500GB SSD
    • 2 x 350W PSU
    • 1 x Rail Kit
    • INSTALLED AND READY TO GO
  • Dell R330 – R330-2
    • 4 x 500GB SSD
    • 1 x 350W PSU (need to order 1)
    • 1 x Rail Kit (just ordered)
    • Awaiting PSU and Rail Kit

These are going to help me decommission my Dell R710 servers. Trusty as they are, they’ve definitely reached their end of life. I’ll keep them as absolute backup machines, but won’t be using them on active duty anymore.

R330-1

  • Websites

R330-2

  • Rust (Fortnightly)
  • Project Zomboid (Fortnightly)
  • Minecraft (Active Version)

It’s actually been pretty tricky keeping decent track of everything, so I’ve recently signed up for the free tier of some Atlassian services – Jira and Confluence. Something a little more formal for my use.

posted this in: General, Servers, Software, Technology
564 Words

April and May have been a busy time, both technically at work and at home with JT-LAB stuff. Work’s been crazy, with me working through 3 consecutive weekends to get a software release out the door, and on top of that, working through some pretty crazy recent requests from clients.

I had the opportunity to partially implement a one-node version of my previous plans, and ran some personal tests with one server running as a single Proxmox node, against a similarly configured server running just Docker instances.

I think I can confidently say that for my personal needs, until I get something incredibly complicated going, sticking to a Dockerised format for hosting all my sites is my preferred way to go. I thought I’d write out some of the pros and cons I felt applied here:

The Pros of using HA Proxmox

  • Uptime
  • Security (everyone is fenced off into their own VM)

The Cons of using HA Proxmox

  • Hardware requirements – I need at least 3 nodes (an odd number) to maintain quorum, since the cluster needs a strict majority of votes and with only 2 nodes, losing one means losing quorum. Otherwise I need a QDevice to supply the extra vote.
    • My servers idle at somewhere between 300 and 500 watts of power;
    • that equates to roughly $150 per quarter on my power bill, per server.
  • Speed – it’s just not as responsive as I’d like, and hopping between sites to do maintenance (as I’m a one-man shop) requires me to log in and out of various VMs.
  • Backup processes – I can back up the entire image, but backing up and restoring a VM after a critical failure isn’t as quick as I’d hoped.

The Pros of using Docker

  • Speed – it’s all on the one machine, with nothing required to move between various VMs
  • IP range is not eaten up by various VMs
  • Containers use as much or as little as they need to operate
  • Backup processes are simple – I can literally just do a directory copy of the Docker mounts as I see fit (see the sketch further below)
  • Hardware requirements – I have the one node, which should be powerful enough to run all the sites;
    • I’ve acquired newer Dell R330 servers, which idle at around 90 watts of power
    • this would cut my power bill per server down by roughly two-thirds per quarter

The Cons of using Docker

  • Uptime is not as guaranteed – with a single point of failure, the server going down would take down ALL sites that I host
  • Security – yes, I can jail users as needed, but if someone breaks out, they’ve got access to all sites and the server itself

All in all, the pros of Docker kind of outweigh everything. The cons can be fairly easily mitigated, given how fast I can copy files across or flick configurations over to another server (of which I’ll have some spares sitting around).
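
To make that file-copy mitigation concrete, here’s a rough sketch of the single-node layout I have in mind – every container keeps its state on a bind mount under one tree, so a backup is just a copy of that tree (names and images are illustrative):

```yaml
# docker-compose.yml – single-node hosting sketch; all state lives in ./sites,
# so "backup" is a plain directory copy to another machine.
version: "3"
services:
  site-a:
    image: php:8.1-apache           # illustrative image
    volumes:
      - ./sites/site-a:/var/www/html
    restart: unless-stopped
  site-b:
    image: php:8.1-apache
    volumes:
      - ./sites/site-b:/var/www/html
    restart: unless-stopped
```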

I’ve been a little bit burnt out from life over April and May, not to mention I caught COVID from the end of April into the start of May. I ended up taking a week of unpaid leave, which, combined with a fresh PC upgrade, has left the finances a bit stretched in the budget.

Time to start building up that momentum again and get things rolling. Acquiring two Dell R330 servers means I have some newer-gen 1RU machines to move to, freeing up some of the older hardware; the new PC build also frees up some other resources.

Exciting Times 😂