I know this is bad practice, but I have never really done backups of any kind...
There are two kinds of people: those that religiously do backups, and those that have not lost data YET.
Now, I need to set up a public web server that will hold user data.
So you are holding data that other people have given to you, and you feel that you need to be trustworthy, or that your users have reasonably high expectations that the data is still going to be there. That makes perfect sense.
My plan is to upgrade the hardware and start fresh with FreeBSD.
An excellent choice of OS.
What would be a good approach for data redundancy / backups,
First, we need to talk a little bit philosophically about what you are trying to accomplish.
Are you interested in availability? For example, if there is a power outage at the server location, do you need to continue serving? In that case, I might recommend getting a UPS (so your server doesn't crash if the power is out for a fraction of a second, or 30 seconds), perhaps combined with an automatic-starting backup generator (we have a 500 gallon propane tank and a 17kW generator at home ... not for the server, but for the water pumps). But if you are going to that level, you also need to worry about network connections. How are you going to get reliable networks? I've heard of people getting both phone company DSL and cable modem, which leads to the (infamous) acronym "Redundant Array of Incompetent Public Utilities", or RAIP-U.
All joking aside: if I had to serve with any real expectation of availability, I would no longer attempt to do it myself. Instead I would go to one of the cloud providers (there are several large ones, and a myriad of small ones) and rent a small server. All the major providers have a plan for "free" machines, as long as you only need a small amount of CPU/memory/disk/network, and several of them offer FreeBSD. I personally have a tiny FreeBSD server that is a virtual machine "rented" (for free) from Google Cloud (look in this forum for my post about that), but there are also other providers.
Next step: storage availability. All disks fail, some faster than others, but always sooner than you want. If you have any expectation of availability, you *will use* RAID; it is not avoidable today. Why? Because if your only disk fails, you will spend literally days restoring the data from backups; not only are several of your days ruined, but your server is also down the whole time. With the size and reliability of disks today, 1-fault-tolerant RAID (simple mirroring, a.k.a. RAID-1, or parity-based RAID with a single parity disk, like RAID-5 or RAID-Z) is no longer sufficient: the probability of hitting a second fault while one disk has already failed approaches 1 as disks reach about 10TB (which they do today). At home, I still use simple mirroring, but I have extremely good backups, and only data that I can live with being gone for a day or two. For a server, I would today use triple mirroring (3 disks, with 3 copies of the data) with reasonably large disks; ~10TB should be more than enough for most small servers. If you need more space, get about a half dozen to a dozen disks and run RAID-Z2. At this point, you're looking at a pretty expensive server chassis.
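To make the two layouts concrete, here is a sketch of what creating them looks like on FreeBSD with ZFS. The pool name "tank" and the device names da0 ... da5 are placeholders; substitute your actual disks.

```shell
# 3-way mirror: three disks, three copies of every block.
# Survives the failure of any two disks.
zpool create tank mirror da0 da1 da2

# Alternative for more space -- RAID-Z2 across six disks:
# usable capacity of four disks, also survives any two failures.
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Verify the pool layout and health.
zpool status tank
```

These are provisioning commands, so run them once when setting up the machine; `zpool status` is the command you will keep coming back to when a disk eventually fails.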
Now in reality, I again think it doesn't make any sense to do this yourself. Go to Amazon AWS, Google Cloud, or Microsoft Azure, look at the price sheet, and rent yourself the CPU, network, and storage you need. It will be cheaper in the long run. And much less hassle. But less fun, if you're the kind of person who thinks that building your own computer is fun.
So far, we have NOT talked about backups at all. Everything above was redundancy: availability and resilience of the data against hardware failures. Backups have a different purpose: even with 12 disks running RAID-Z2 (or a really good setup rented in the cloud), a clueless admin, a rogue employee, or an evil hacker can still do "rm -rf /", and the super-reliable and fast file system will permanently and thoroughly delete all your data. Or you could have a small fire at your house or office, and all three copies in your small server burn. That's why it's important to have backups: not against disk failures, but against human failures; and off-site backups, against destruction of a whole site.
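One cheap, partial defense against the "rm -rf" class of mistakes (partial, because anyone with root can destroy snapshots too, and they don't help against fire) is ZFS snapshots: read-only, point-in-time copies that survive deletions in the live file system. A sketch, assuming a dataset named tank/data:

```shell
# Take a read-only snapshot of the dataset.
zfs snapshot tank/data@before-cleanup

# After an accidental deletion, the old contents are still
# visible under the hidden .zfs directory:
ls /tank/data/.zfs/snapshot/before-cleanup

# Or roll the whole dataset back to the snapshot.
zfs rollback tank/data@before-cleanup
```

Snapshots are nearly free to create (ZFS is copy-on-write), so taking them frequently costs little; they complement, rather than replace, real off-machine backups.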
Here would be my suggestion: (a) Use a commercial cloud service. (b) Otherwise, get a server chassis with 3 ... 12 data disks, and run ZFS, either with 3-way mirroring or RAID-Z2. Absolutely use ZFS rather than any other file system, since it gives you RAID built in, plus checksums for data consistency protection. (c) Find some backup software, and do backups, frequently. For example, hourly to a separate disk drive, and daily or weekly off-site.
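Point (c) can be sketched with ZFS's own replication tools: hourly incremental sends to a pool on a separately attached backup disk, and periodic full sends to a removable disk that travels off-site. The pool names (backup, offsite) and snapshot names (@prev, @now) here are placeholders for illustration:

```shell
# Hourly: take a snapshot, then send only the changes since
# the previous snapshot to a pool on a separate backup disk.
zfs snapshot tank/data@now
zfs send -i tank/data@prev tank/data@now | zfs receive backup/data

# Weekly: a full send to a pool on a removable disk,
# which is then carried off-site.
zfs send tank/data@now | zfs receive offsite/data
```

In practice you would generate the snapshot names from the date and drive this from cron (or use an existing tool that automates exactly this pattern); the sketch just shows the mechanism.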
For backup software, I have no concrete suggestion. At home, I use something I wrote myself, which is full of bugs and idiosyncrasies but happens to work for me: it makes an hourly backup to a small backup disk (which sits right next to the server, in a 1300 lbs fire-proof safe), and weekly backups to a disk that is carried off-site. At work, there are infinitely complex backup systems, which are hard to understand.