Dumb question. Usually, user-space programs use the following logic to identify mount points: if a directory has a different device (the st_dev field in the stat structure for the directory) from its parent directory, then it must be a mount point. Traditional (single-disk) file systems use the device number of the underlying disk as the st_dev number. This doesn't work for file systems that don't have a 1-to-1 correspondence between mount points (or similar entities) and devices; it also fails for interesting forms of mounting (like union file systems).
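The traditional heuristic above can be sketched in a few lines of Python; this is essentially what the stdlib's os.path.ismount() does, and it inherits the same blind spots (bind mounts and union file systems on the same device, for instance):

```python
import os

def is_mount_point(path):
    # Traditional heuristic: compare the directory's device number
    # (st_dev) with its parent's.
    st = os.lstat(path)
    parent = os.lstat(os.path.join(path, os.pardir))
    if st.st_dev != parent.st_dev:
        return True  # different device => crossed a mount boundary
    # Same device: path is a mount point only if it IS its own
    # parent, i.e. the root directory.
    return st.st_ino == parent.st_ino
```

For example, is_mount_point("/") is always true, while a freshly created subdirectory inside an existing file system reports false.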
So try the following: run a "stat -s" command on everything you think is a mount point and look at the st_dev fields. Each should be distinct, and should then be inherited by every file and directory in that file system. Then run the "mount" command; the two methods should agree on the list of file systems.
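That cross-check can be automated; a minimal sketch, assuming a Linux system where the kernel's mount list is readable at /proc/self/mounts (elsewhere you would parse the output of mount(8) instead):

```python
import os

def device_numbers(paths):
    # st_dev for each candidate; within one file system, every file
    # and directory should report the same number.
    return {p: os.lstat(p).st_dev for p in paths}

def kernel_mount_points():
    # The kernel's own list of mount points (second field of each
    # line).  /proc/self/mounts is Linux-specific.
    with open("/proc/self/mounts") as f:
        return [line.split()[1] for line in f]
```

If the file system is playing by the rules, each path returned by kernel_mount_points() should show an st_dev different from its parent directory's, and the set should match what mount prints.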
This is a problem for most modern file systems. What most do is use very large random numbers for st_dev. In theory, it is considered sporting if a file system changes the st_dev field at every mount point and returns something unique.

Note that the concept of a "mount point" can be tricky in modern file systems. My personal definition is two-fold. First, a mount point is something that can come and go, depending on administrative commands to the file system that take the place of "mount" in traditional file systems. Second, file system boundaries should also be places where the output of the df command (or the equivalent statfs and statvfs system calls) changes, in particular for space usage / quota / accounting purposes.

But these two definitions can both give silly answers. For example, a file system may have hundreds of snapshots ... are those all individual mount points? In some cases they won't even be visible in the readdir() call, so a utility that walks the tree won't find them. And things like "space usage and availability" can be an impossible concept in modern file systems, where storage is pooled and blocks are shared between snapshots.
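To make the df side of that definition concrete, here is a small sketch of the numbers df(1) derives from statvfs(2). On pooled or snapshotting file systems, treat these as best-effort estimates rather than hard per-file-system limits:

```python
import os

def df_like(path):
    # Total and available bytes for the file system containing path,
    # computed the way df(1) does from statvfs(2).
    sv = os.statvfs(path)
    block = sv.f_frsize or sv.f_bsize   # fragment size, if reported
    total = sv.f_blocks * block
    avail = sv.f_bavail * block         # available to non-root users
    return total, avail
```

Under the second definition above, crossing a mount point is exactly where these numbers jump.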