I end up agreeing with you on everything; I'll just add details.
Type man rsync, then search for -x by pressing /-x <Enter>. It will jump directly to this line:
-x, --one-file-system    don’t cross filesystem boundaries
Tried that, but it didn't work: "/-x" jumps to the line that has "--xattrs, -X". Searching for "/filesystem" interestingly doesn't find anything (I wonder why not). So what I did was go to the web-based man page on the FreeBSD site and search within the page. But that shouldn't be necessary; for a tool like scp (which is nothing but cp with remote nodes via ssh), the man page should be dozens or a hundred lines long, not a confusing 4000.
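For what it's worth, a workaround that usually gets me there (assuming an ordinary man/grep setup; the exact patterns are just illustrations) is to search the flat text instead of paging through it:

  # search the rendered man page text directly for the long option name
  man rsync | grep -n -- --one-file-system
  # strip bold/underline overstriking first, in case that is what defeats the in-pager search
  man rsync | col -b | grep -n filesystem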
But spaces can be used, you just have to keep in mind that the names are parsed twice – first by your local shell, and then by the remote shell, so you have to quote them twice (as you have noticed).
For me, as someone who knows how shells work, that's reasonably easy; typically I get it on the second try. Again, for a simple tool like scp, it should work on the first try: if the command works for cp, it should work for scp, just with "host:" or "user@host:" prefixed.
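To make the double parsing concrete, here is a sketch assuming the classic SCP-protocol behaviour, where the remote path is handed to a shell on the remote host ("host" and "my file.txt" are made-up names):

  # local name with a space: one level of quoting, exactly like cp
  scp "my file.txt" host:/tmp/
  # remote name with a space: the remote shell parses it a second time,
  # so it needs a second level of quoting or escaping
  scp 'host:"my file.txt"' .
  scp host:'my\ file.txt' .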
The real brokenness of scp appears when you look at commands like this: "scp somefile remote:'$(touch gotcha)'"
Absolutely. This just has to be disallowed in scp. Fortunately, one hardly ever does that by mistake: few people have files in their file system that are called "$(touch gotcha)", although it is theoretically possible. But file names with spaces are (unfortunately) somewhat common.
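To spell out what the quoted command does (again assuming the classic behaviour where the remote path is re-parsed by a remote shell):

  # the single quotes only protect $(touch gotcha) from the *local* shell;
  # classic scp then hands the path to a shell on the remote host, which
  # performs the command substitution and creates a file named "gotcha" there
  scp somefile remote:'$(touch gotcha)'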
All this could be fixed by a new version of scp that doesn't simply run a command over ssh, but uses a dedicated protocol which distinguishes between already-parsed-and-expanded file names and command-line options. In reality, this all points to a more fundamental problem of the Unix CLI user interface: it doesn't distinguish between parameters in general (for example file names, which are nothing but opaque strings) and options, and packs them all together into arguments. The classic joke is a person who creates a file named "-R" and then wonders why "rm -f *" recursively deletes subdirectories too, even though the user clearly didn't intend that. Now take that known (and 50-year-old) brokenness, add double parsing of everything on commands that go over the network, and it becomes even more user-hostile.
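The rm joke is easy to reproduce in a scratch directory (all the names here are just for illustration):

  mkdir /tmp/demo && cd /tmp/demo
  mkdir subdir && touch subdir/file.txt
  touch -- -R      # a file literally named "-R"
  rm -f *          # the glob expands to "-R subdir ...", and rm reads "-R"
                   # as an option, so subdir is deleted recursively
  # the usual defences: rm -f -- *   or   rm -f ./*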
The problem is that scp cannot be fixed without giving up backwards-compatibility. Therefore, the best option would be to write a new tool (with a different name!) that uses a different protocol (e.g. sftp, or a completely new protocol), while retaining a usage similar to scp.
Agree. Something like scp is needed; simply deleting the current one would create hardship. But the new one has to be better, and that will make it different, so it also needs a different name.
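As a sketch of what "different protocol, similar usage" already looks like in practice (option names and defaults vary between rsync versions, so check your local man page rather than taking this as definitive):

  # rsync can send the remote names over its own protocol instead of letting
  # a remote shell re-parse them (-s, a.k.a. --protect-args):
  rsync -s 'user@host:my file.txt' .
  # the name travels as an opaque string, so spaces and $(...) are not
  # expanded on the remote side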