Prompt for y/n confirmation to move a file...

POSIX is not a good reason to keep such a thing, which is actually outdated. Just let it be free, libre, and grant write authorization.
An operating system should have solid, reliable binaries, whatever POSIX may or may not recommend. Common sense, not rules and religions.
 
POSIX is not a good reason to keep such a thing, which is actually outdated.
An operating system should have solid, reliable binaries, whatever POSIX may or may not recommend. Common sense, not rules and religions.
This turned into a ridiculous troll thread. You are trying to overwrite the binary of a running process, which has been locked by the OS because it is running. Are you trying to fool us?
 
This turned into a ridiculous troll thread. You are trying to overwrite the binary of a running process, which has been locked by the OS because it is running. Are you trying to fool us?

In any case, a y/n prompt cannot be recommended for use in scripts (and note that it is the default).
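As a sketch of what "not recommended in scripts" means in practice: the usual way to keep a script from hanging on such a prompt is to pass -f, or to redirect stdin so any prompt reads EOF instead of waiting (the filename here is made up for illustration):

```shell
#!/bin/sh
# Force non-interactive removal: -f suppresses the prompt entirely,
# even for a file without write permission.
rm -f ./stale-file

# Alternatively, redirect stdin so a prompting utility reads EOF
# and simply declines, instead of hanging the script.
rm ./stale-file </dev/null
```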

It is not necessary or obligatory to lock it, right?

You are free to run at any time: # rm -rf /* which would delete everything.
 
An operating system should have solid, reliable binaries, whatever POSIX may or may not recommend. A programmer's common sense, not rules and religions.
It has. This is good sense. Why? Easy: because it's better to be safe than sorry. Remember that the UNIX philosophy includes choosing the more common situation as the default behaviour. How many times will you try to remove a "non-writable" file/directory? And why are there no write permissions? Probably for a good reason, which is up to the user/admin to know. It's the "empower the user" logic again, even if it may not seem so. I find this invaluable in a private environment; imagine what it's worth in a production one!
Regarding POSIX: this is not a matter of religion. It is again good sense: POSIX and SUS were born to favour interoperability between systems (and for the mental health of their sysadmins, IMHO :) ), not out of caprice. "But these are rules!" - Of course. Rules, when backed by logic, are a good thing. The page I referenced before gives you the reasons behind it.
 
It has. This is good sense. Why? Easy: because it's better to be safe than sorry. Remember that the UNIX philosophy includes choosing the more common situation as the default behaviour. How many times will you try to remove a "non-writable" file/directory? And why are there no write permissions? Probably for a good reason, which is up to the user/admin to know. It's the "empower the user" logic again, even if it may not seem so. I find this invaluable in a private environment; imagine what it's worth in a production one!
Regarding POSIX: this is not a matter of religion. It is again good sense: POSIX and SUS were born to favour interoperability between systems (and for the mental health of their sysadmins, IMHO :) ), not out of caprice. "But these are rules!" - Of course. Rules, when backed by logic, are a good thing. The page I referenced before gives you the reasons behind it.

I think you could see that "blocking" or "preventing" is not a means of increasing safety where rm, cp and mv are concerned. It is actually rather the opposite, at least for binaries of such importance (mv, cp, ...) and such a "large audience".
The Unix philosophy tends rather towards simple solutions that do the job well. A confirmation is an annoyance; it can go against these principles. I am sure I could find a good example where this confirmation endangers the system, by encouraging you to trust the OS with something you should be taking care of yourself. And what if we change the terminal parameters? The confirmation then becomes useless; it is just a matter of understanding what the terminal does. These files, cp and mv, are among the most important components of the operating system. Therefore they should work and be reliable. The BSDs want a reliable base system, right?

Anyone should be allowed to override and overwrite by default when using cp and mv; that is a matter of programming principle.

To be more fun and more human, it is definitely better to be free. Isn't it spelled FreeBSD?

Safety, security... that is the admin's job, usually at least. The admin knows what is running, the risks, and what the operating system does.
"Next > Next > ... Warning > Do you want to delete? Do you really want to delete? Are you really sure?" - this is Windows, and an annoyance.
If an admin needs popups and helpful explanations, then there are X11 applications for that, ideally suited to each user's needs.

Confirmations, or "popups", make the operating system less reliable, especially for cp, mv and rm.

Feel free to ask for more information.
 
Looking at the history, the option has existed for nearly 35 years and people expect it to be there.

The rm from 4.4BSD Lite also has it: https://www.freebsd.org/cgi/man.cgi...manpath=4.4BSD+Lite2&arch=default&format=html
And it contains a nice little footnote: IEEE Std 1003.2 (``POSIX'') compatible. POSIX isn't "old"; it is actively maintained and regularly updated.
Here's the 2018 definition of rm: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/rm.html

I suggest you state your case on the mailing lists and perhaps submit a PR to get the option removed. You can also try and join IEEE and get the POSIX standard redefined. Until then I give you very little hope you will get this option removed.
 
Thank you for the history and your post!
I will rather go to the POSIX people for a more specific answer.

It is clear that FreeBSD should follow POSIX; that's a standard.

Here is a typical case where an overwrite gets blocked by the OS: it hangs due to the y/n confirmation requirement (POSIX).
example:
Code:
 cd ; cp /usr/local/bin/mc . ; screen -d -m ./mc ; touch foo ; mv foo mc
Eventually, this could lead to a script stopping mid-run if the user programs a list of commands. In other words, some scripts might hang.

@Maxnix
What do you think would be the possible danger of overwriting the file of a running process without this required confirmation?
An example would help understanding.
 
The system is not required to load the entire file into memory when a process starts. What happens when the running process needs to load more of its file into memory for later functions, and the file is suddenly not available?
 
The system is not required to load the entire file into memory when a process starts. What happens when the running process needs to load more of its file into memory for later functions, and the file is suddenly not available?
The process is then killed, likely with a segfault.

What happens then if a running script hangs during a backup? Then there is no completed script and no backup at all.
I would prefer that the backup run well and be reliable.

In both cases, it is a data/script/process failure.
 
The system is not required to load the entire file into memory when a process starts. What happens when the running process needs to load more of its file into memory for later functions, and the file is suddenly not available?
I'm not entirely sure but I suspect the file descriptor is still open so the file isn't really deleted, it's still there until all file descriptors referencing it are closed. It's the same issue with people having a filled up /var/log and removing logfiles but not restarting syslogd(8), the space is freed only if their file descriptors have been closed. Until that happens syslogd(8) will be completely oblivious and will continue to use the, now deleted, files as if nothing happened.
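A minimal sketch of that behaviour, using a throwaway file instead of a real logfile: rm removes the directory entry, but the data stays reachable through the still-open descriptor until it is closed:

```shell
#!/bin/sh
# Create a scratch file and keep it open on descriptor 3.
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"

# Unlink it: the name is gone, but the inode survives because
# fd 3 still references it.
rm "$tmp"

# The data is still readable through the open descriptor.
cat <&3            # prints: still here

# Only closing the last descriptor actually frees the blocks.
exec 3<&-
```

This is exactly why a filled-up filesystem does not shrink until the daemon holding the descriptors is restarted or signalled to reopen its files.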
 
I'm not entirely sure but I suspect the file descriptor is still open so the file isn't really deleted, it's still there until all file descriptors referencing it are closed. It's the same issue with people having a filled up /var/log and removing logfiles but not restarting syslogd(8), the space is freed only if their file descriptors have been closed. Until that happens syslogd(8) will be completely oblivious and will continue to use the, now deleted, files as if nothing happened.

No idea about /var/log...

For some small processes, it actually has no effect at all.
Take mc: you can keep it running, and Midnight Commander will continue to work as if nothing happened; in fact the mc file can be replaced by anything. Memory will let mc run on as if nothing happened.
If the user then starts the overwritten mc, a new process is created and the new mc runs as if nothing happened. Nice, and the user is happy. No data loss.
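The surviving-process part of this claim can be sketched with any long-running program; here /bin/sleep stands in for mc, copied to a hypothetical scratch name for illustration. Removing (unlinking) the file does not stop the process, because the kernel keeps the executable referenced while it runs; overwriting the file in place is the different, problematic case discussed below:

```shell
#!/bin/sh
# Run a private copy of a binary, then remove its file while it runs.
cp /bin/sleep ./mysleep
./mysleep 5 &
pid=$!

rm ./mysleep          # name gone; the running process is unaffected

# kill -0 only checks that the process exists; it sends no signal.
kill -0 "$pid" && echo "still running"
wait "$pid"
```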
 
The process is then killed, likely with a segfault.

Nope. This would happen inside the kernel VM, so it wouldn't be a segfault, but very likely a panic. The same as you get when unplugging the disk that holds the swap space - it is technically the same: loss of disk-backed virtual memory.

But SirDice is right, the binary file is still open.

For some small processes, it actually has no effect at all.
Take mc: you can keep it running, and Midnight Commander will continue to work as if nothing happened; in fact the mc file can be replaced by anything. Memory will let mc run on as if nothing happened.

Nope. There is no such thing as a process "running in memory". All memory pages in use must be disk-backed.[1] (Read Matthew Dillon's beautiful paper on the VM.) Simple reason: there can always be a process coming along with a higher priority and high memory demand, and then the stuff must be evicted.

[1] The exception is wired memory, which is not supposed to be paged out. (That's why I don't like the incredibly high wired counts we see nowadays - they eat away at system robustness.)
 
Nope. This would happen inside the kernel VM, so it wouldn't be a segfault, but very likely a panic. The same as you get when unplugging the disk that holds the swap space - it is technically the same: loss of disk-backed virtual memory.

But SirDice is right, the binary file is still open.



Nope. There is no such thing as a process "running in memory". All memory pages in use must be disk-backed.[1] (Read Matthew Dillon's beautiful paper on the VM.) Simple reason: there can always be a process coming along with a higher priority and high memory demand, and then the stuff must be evicted.

[1] The exception is wired memory, which is not supposed to be paged out. (That's why I don't like the incredibly high wired counts we see nowadays - they eat away at system robustness.)
Thank you very much!

It helps a lot!
 
An example, a little thought experiment...

If you open a PDF document under Windows, then Acrobat Reader (for instance) holds that PDF open. When you would like to write to this PDF document, you will be blocked by the (hmm) operating system. You then need to close the process reading the PDF in order to write to it and replace it with another one.

Why shouldn't the BSDs actually be the same? They should be, for better data safety and consistent behaviour.
 
If you open a PDF document under Windows, then Acrobat Reader (for instance) holds that PDF open. When you would like to write to this PDF document, you will be blocked by the (hmm) operating system. You then need to close the process reading the PDF in order to write to it and replace it with another one.

This is actually a limitation (feature) of Windows. On macOS for example most applications are aware of externally applied changes to files they have opened, and this includes renaming, changes to the content and even deleting.

For example, when I open a PDF file on the desktop in Preview and then move the file into the trash can, the window title immediately reflects the new location. When I then empty the trash can, Preview immediately closes the window, because the underlying document has gone and it was opened read-only. An editor leaves the document open, but when you want to save, the editor asks for a new file name and location.

Another example. When I open the same text file in two different text editors (e.g. TextEdit.app and CotEditor.app) and then change and save text in one editor, the new text automatically and almost immediately appears in the opened text document of the other one.

Mac users expect this kind of global file-state awareness. Windows users had better not expect too much.
 
Another example. When I open the same text file in two different text editors (e.g. TextEdit.app and CotEditor.app) and then change and save text in one editor, the new text automatically and almost immediately appears in the opened text document of the other one.

Yeah, they can afford this because they are running mostly single-user. On a unix, where multi-user was always the default, you would not want your editor-contents happily changing just because somebody else had edited the underlying file - you would want to get a notification and the option to save your copy somewhere else (like emacs does).
 
Yeah, they can afford this because they are running mostly single-user. On a unix, where multi-user was always the default, you would not want your editor-contents happily changing just because somebody else had edited the underlying file - you would want to get a notification and the option to save your copy somewhere else (like emacs does).

This has nothing to do with single- vs. multi-user. In this respect there is no difference between macOS and FreeBSD. This has to do with system services which macOS provides as an easy API for application-level programmers. The example of the OP was Acrobat Reader on Windows, so I chose similarly full-featured GUI applications on macOS, simply in order to give a counter-example to the behaviour he saw on Windows.

Now, when I open the very same text file on macOS in nano, or vi, or emacs in the Mac terminal (all 3 come with macOS included and are ready to use in /usr/bin), then I don't expect these editors to be aware of changes applied externally to any files they have opened - of course not.
 
This has nothing to do with single- vs. multi-user. In this respect there is no difference between macOS and FreeBSD.

Not technically. But the feature you described only makes sense within a single (probably graphical) user session - which is the major use case for a MAC. My comment was about usability, not about technology.

Now, when I open the very same text file on macOS in nano, or vi, or emacs in the Mac terminal (all 3 come with macOS included and are ready to use in /usr/bin), then I don't expect these editors to be aware of changes applied externally to any files they have opened - of course not.

So these are not compiled with the API? *biggrin*
Indeed, as soon as I manage to find that terminal, I feel almost at home...
 
Not technically. But the feature you described only makes sense within a single (probably graphical) user session - which is the major use case for a MAC. My comment was about usability, not about technology.



So these are not compiled with the API? *biggrin*
Indeed, as soon as I manage to find that terminal, I feel almost at home...

People who don’t know how to spell Mac correctly had better not find the Terminal.
 
So then, on a Unix operating system, what possibility do we have to allow file modification without a notification? None, likely?

The overwrite warning stops all scripts if the process is running. Can it be configured to let the user run his/her scripts freely?

This is actually a limitation of FreeBSD, because Linux has no such problem.
- on FreeBSD: it hangs,
- on Linux: it works,
=> Let's be jealous of Linux.
 
What’s wrong with:

cd ; cp /usr/local/bin/mc . ; ./mc ; touch test ; echo y | mv test mc

In a script you would add > /dev/null 2>&1 to the actual cp/mv/rm command, in order to suppress any stray y’s.
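For what it's worth, the portable way to skip the prompt altogether is the -f option, which POSIX defines for mv (and rm) precisely so that scripts never stop to ask. A sketch along the lines of the example above, in a throwaway directory:

```shell
#!/bin/sh
# mv -f never prompts, even when the destination lacks write
# permission; the rename only needs write access to the directory.
cd "$(mktemp -d)"
echo "old" > mc
chmod a-w mc          # simulate an unwritable destination
touch test
mv -f test mc         # replaces mc without asking
```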
 
What’s wrong with:

cd ; cp /usr/local/bin/mc . ; ./mc ; touch test ; echo y | mv test mc

This is a scripting workaround.
However, you might think that the user would sometimes like to dream that his/her shell script will work without a workaround - if possible.
Another likely workaround is to use the Linux coreutils.
 