This was a problem in the early days, but I don't recall experiencing it.
There has been talk of corrective action, but I do not know whether it was ever completed.
Is this actually still a problem today?
Edit:
A journaled fsck using the su+j journal will not necessarily succeed if the HDD loses power while writes are still in flight.
In that case a full, non-journaled fsck is run instead.
Of course, if the HDD had no writes outstanding, the fsck finishes almost immediately.
Is there enough confidence in the journal to recommend su+j instead of plain su on HDDs that are constantly being written to?
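For reference, you can at least check whether a given filesystem actually has the su+j journal enabled before relying on it; the partition name below is just a placeholder:

  # Print the current UFS tuning flags for the filesystem on ada0p2
  tunefs -p /dev/ada0p2
  # The output should include lines like "soft updates: (-n) enabled"
  # and "soft update journaling: (-j) enabled" when su+j is active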
With su+j, the journal is only written to when files or directories are deleted. That's it. Soft-updates handles metadata writes; that's all it does. Data writes are never journaled. Constant data writes to a soft-updates filesystem are the same as constant writes without soft-updates. The only difference is when the metadata (including indirect blocks) is written.
If you want all data writes journaled too, gjournal is your only option.
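To make the distinction concrete, here is a rough sketch of turning on su+j for an existing UFS filesystem; the filesystem must be unmounted or mounted read-only, and the partition name is only an example:

  # Enable soft updates, then add the metadata journal on top of it
  tunefs -n enable /dev/ada0p3
  tunefs -j enable /dev/ada0p3

Nothing in that journal ever contains your file data; it only records the metadata operations that soft updates would otherwise leave for a background fsck to clean up.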
Can soft-updates handle lots of data writes? Yes, because soft-updates only handles metadata.
If you want to journal user data as well, then use gjournal. Their paradigms are different.
Talking about different paradigms: other journaling filesystems, e.g. EXT3/4 (in their default data=ordered mode), XFS, NTFS, and JFS, only use their journal for metadata writes, just like su+j does (for file deletes); ZFS and BTRFS get their consistency from copy-on-write rather than a traditional journal. None of them journal user data the way gjournal does. Again, if you want the protection of user data journaling, use gjournal.
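For comparison, a bare-bones gjournal setup looks roughly like this; the disk name and mount point are placeholders, and gjournal(8) plus the Handbook cover the details:

  # Load the geom_journal module
  # (add geom_journal_load="YES" to /boot/loader.conf to make it permanent)
  gjournal load
  # Create a journaled provider on the disk; this exposes /dev/ada1.journal
  gjournal label /dev/ada1
  # Build a UFS2 filesystem with the gjournal flag set
  newfs -O 2 -J /dev/ada1.journal
  # Mount it; async is reasonable here because the journal also covers data writes
  mount -o async /dev/ada1.journal /data

Because every write, data included, goes through the journal first, you pay for it in throughput and journal space, which is the trade-off against su+j.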