Solved: Is there an alternative to CrystalDiskMark?

I have a Samsung SSD which freezes Windows during CrystalDiskMark's random 4K Q8T8 test. I'd like to run the same test on FreeBSD, but I haven't found a real solution yet. On Linux this is possible with fio, but when I install fio from ports it lacks the libaio engine (libaio is Linux-specific). In theory it could be compiled with that engine, but I'm not familiar with C, so I have no idea whether that's simple or whether I'd need to ask the maintainer to do it for me. For Linux there is the KDiskMark project, which does the same as CrystalDiskMark, but it is not in ports and it has some requirements of its own, so I got the impression it would take at least a week or more to figure out. I don't have the time for that, so I'm looking for an alternative benchmark that can do the same. Is there anything?
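For what it's worth, fio from ports can approximate CrystalDiskMark's RND4K Q8T8 workload without libaio, using its posixaio engine (which on FreeBSD maps to the native aio(4) facility). A minimal sketch, assuming fio installed from ports; the test file path, size, and runtime are illustrative:

    # Approximate CrystalDiskMark RND4K Q8T8: 4 KiB random reads,
    # queue depth 8, 8 workers. posixaio uses FreeBSD's native aio(4).
    fio --name=rnd4k-q8t8 --filename=/var/tmp/fio.test --size=1g \
        --ioengine=posixaio --direct=1 --rw=randread --bs=4k \
        --iodepth=8 --numjobs=8 --runtime=30 --time_based --group_reporting

Note that --direct=1 requires O_DIRECT support from the file system underneath: it is fine on UFS, while ZFS may ignore or reject it depending on the OpenZFS version.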
 
Synthetic benchmarks on SSDs (or flash in general, such as NVMe) tend to give misleading results. Because the devices are so fast, you're really testing the performance of the rest of the stack. In the real world, SSDs sit underneath the file system and get used from the kernel; in silly benchmarks, they are used directly from userspace. The problem is that the user -> kernel transition for direct IO is a code path that is otherwise not heavily exercised, and frequently not performance tuned. That's particularly true for aio, which is a stepchild of OS implementation, because almost nobody who does serious work needs it. So if you run such a benchmark, you're testing an irrelevant backwater of the system, and the results are unlikely to be representative. There is an exception: operating systems (such as AIX or HP-UX) that have been carefully tuned for best performance with databases that bypass file systems (such as DB2 or Oracle). On those, you can run benchmarks that mimic the database's IO pattern and access method and expect somewhat realistic results.

The other problem is this. In the real world, SSDs are used as part of a storage stack, typically with a file system (or database) on top. The performance of the system depends heavily on how the file system uses the SSD. Even more than with spinning rust, SSD performance depends on the IO pattern: the size of IOs, their spacing (sequential versus random versus interleaved versus strided), and the queue depth or degree of asynchronicity. SSDs have far more software in them than disks: their FTLs (flash translation layers) are very complex (the research literature is full of papers on them) and are explicitly tuned for particular workloads.
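You can see this pattern sensitivity first-hand by running the same random-read job at several queue depths and comparing the reported IOPS. A rough sketch, again with fio's posixaio engine and an illustrative test file and runtime:

    # Sweep queue depth; IOPS typically scale with depth until the
    # device or the aio code path saturates. (sh syntax)
    for qd in 1 4 8 32; do
        fio --name=qd${qd} --filename=/var/tmp/fio.test --size=1g \
            --ioengine=posixaio --rw=randread --bs=4k \
            --iodepth=${qd} --runtime=15 --time_based
    done

The spread between the depth-1 and depth-32 numbers on the same device illustrates why a single synthetic figure says little about how a real workload will behave.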

The usual benchmarking advice applies: the only real and relevant benchmark is your workload. Configure the system as you would for production, tune it, run the workload you actually need to serve, and measure the performance.
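On FreeBSD the measuring part is straightforward with the base-system tools; for example, while the production workload runs:

    # Live per-device IOPS, throughput, and latency (physical providers only)
    gstat -p

    # Or cumulative extended disk statistics, refreshed every second
    iostat -x 1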
 
If you read the question, this is a specific case where the SSD freezes the entire operating system during a certain type of test. That is abnormal, and I tried to figure out whether the problem is with the SSD or with the OS itself. I did not have much time, so I ended up installing Ubuntu and running KDiskMark there. Sadly there is no FreeBSD port that does the same. It worked fine on Ubuntu without any stuttering or freezing, so it looks like the SSD's Windows driver has a bug. It is not a big deal, because I'll be installing a UNIX system on that SSD, probably FreeBSD, rather than Windows.
 