If you chose to configure mirror vdevs, you can add new mirrors of smaller disks and then remove the old mirrors one by one. But keep in mind that this can take a lot of time due to the rebalancing each vdev removal triggers; especially with spinning rust and SATA, each removal job might take a full day.
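For reference, a minimal sketch of that add/remove swap, assuming a pool named "tank" and placeholder device paths; your actual vdev labels (mirror-0 etc.) will differ, so check `zpool status` first:

```
# add the new mirror as an additional top-level vdev
zpool add tank mirror /dev/disk/by-id/NEW_DISK_A /dev/disk/by-id/NEW_DISK_B

# evacuate and remove one of the old mirrors;
# ZFS copies its data onto the remaining vdevs
zpool remove tank mirror-0

# the evacuation progress shows up in the pool status
zpool status tank
```

Repeat add/remove for each old mirror, one at a time, and let each evacuation finish before starting the next.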
If you opted for raidz, then no: those vdevs can't be removed or scaled down in any way. That's why you should always go for mirrors unless you have a *very* good reason not to...
OTOH: the current cost-per-TB "sweet spot" for spinning drives seems to be at the ~8-10TB models, while 4TB drives are becoming rather scarce and cost way more per TB. So instead of 12x4TB I'd go for 6x8TB or even 4x12TB, create a new pool and just zfs send|recv all datasets onto the new pool...
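A rough outline of that send|recv migration, assuming the old pool is called "oldpool" and the new one "newpool" (adjust names and options to taste):

```
# snapshot the whole dataset tree on the old pool
zfs snapshot -r oldpool@migrate1

# replicate everything (child datasets, snapshots, properties) to the new pool;
# -u keeps the received datasets unmounted for now
zfs send -R oldpool@migrate1 | zfs recv -Fu newpool
```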
EDIT: sorry, I totally overlooked that you want to go for SSDs.
I'm currently also migrating my home server from a collection of ten 3-4TB SAS HDDs of various ages, some older SATA SSDs and two NVMe drives to all-flash...
The aforementioned cost/TB "sweet spot" seems to sit at 2TB for higher-endurance consumer SSDs right now, but if you are willing to trade some endurance for capacity, it goes up to 4TB or even 8TB.
If that pool doesn't see a lot of I/O, the Samsung 870 QVO 8TB is quite cheap at ~75-80EUR/TB, and at 2.88PB TBW it should have 'OK' endurance for a low-load pool in a home server. At 4TB the Transcend SSD230S is a bargain at ~60EUR/TB, and at 2.24PB TBW it has almost twice the endurance of the Samsung drives.
Given that I've used several Transcend SSD230S and MTE220S drives and they held up great endurance-wise, I'd definitely go for those. They might not be the fastest, but in a pool with multiple vdevs they can still easily saturate a 10Gbit link, and they offer by far the highest endurance rating in that price segment.
Another option might be to keep an eye out for a good deal on used enterprise SSDs that haven't seen many writes. But it will be hard to find something >2TB at a better per-TB price than those Transcend SSDs.
While you *could* add and remove single vdevs on a mirror pool to migrate from HDD to SSD, I wouldn't do that: providers/vdevs with vastly different I/O characteristics might (will) lead to very weird and unexpected pool behaviour. Given past experience with dying disks that "only" showed increased latency and still dragged the whole pool into an unusable state, I'd expect similar behaviour if some providers suddenly have 1/100th of the latency and IOPS capabilities that are orders of magnitude higher. That won't play well with how ZFS tries to spread and queue load across vdevs...
So still: Just create a new pool and send|recv
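If you want to keep downtime short, the last steps of such a migration could look roughly like this (again with assumed pool names "oldpool"/"newpool", building on the initial full send shown above):

```
# stop services writing to the old pool, then take a final snapshot
# and send only the delta since the first one
zfs snapshot -r oldpool@migrate2
zfs send -R -i @migrate1 oldpool@migrate2 | zfs recv -Fu newpool

# optionally retire the old pool and let the new one take over its name
zpool export oldpool
zpool export newpool
zpool import newpool oldpool
```

Double-check mountpoints and share properties on the received datasets before pointing services at the renamed pool.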