A memory overwrite can cause anything. The behavior is undefined. There was a joke footnote in an old C textbook or standards document which said something like "undefined means: it could reformat your disk". Honestly, if you had reported that after overwriting memory there were purple elephants flying around your computer, I would have been only mildly surprised.

You ask: "but still can not wrap my head around how a memory overwrite can cause a function to not be found?"
Now a little more serious: Think about how "dynamic linking" must work. When the compiler and linker prepare the executable for your program, they typically don't actually "link" a copy of the standard libraries (such as libc, which contains all the usual string and IO functions) into your executable file on disk. Doing that would be very wasteful: you would end up with lots of executables eating disk space, each storing its own copies of the same functions (open, write, printf, strcpy, ...), and at runtime there would be lots of copies of all those functions in memory, which is even more wasteful. So instead, your "executable" on disk is not actually immediately executable. It contains a little stub memory area which says something like: "If we need to call printf at runtime, please look in the dynamic libc library such as /lib/libc.so, make sure one copy of it is loaded somewhere in memory, record here where it is, and then call the printf function at that address".
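You can see this with your own eyes. Here is a minimal sketch (assuming a glibc/ELF Linux system; dlsym with RTLD_DEFAULT and dladdr are GNU extensions) that asks the dynamic linker where printf actually ended up at runtime:

```c
/* Minimal sketch, assuming a glibc/ELF Linux system: ask the dynamic
 * linker where printf was actually resolved to at runtime. */
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* RTLD_DEFAULT: look printf up in the already-loaded libraries,
     * the same way a call through the stub would be resolved. */
    void *addr = dlsym(RTLD_DEFAULT, "printf");
    printf("printf lives at %p -- not inside this executable\n", addr);

    /* dladdr tells us which shared object owns that address;
     * expect a path to libc.so rather than to our own binary. */
    Dl_info info;
    if (dladdr(addr, &info) && info.dli_fname)
        printf("it was loaded from: %s\n", info.dli_fname);
    return 0;
}
```

Compile with something like gcc demo.c -ldl (on recent glibc the -ldl is no longer needed). The executable itself stores only that small table entry pointing at the address inside libc.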
Now user noahbar comes and overwrites that little bit of memory which tells the dynamic linker where printf really is. You see the problem?
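The real record lives in a table the dynamic linker maintains (the GOT, on ELF systems), but you can reproduce the same failure mode with an ordinary function pointer. A contrived sketch, in which the overflow is deliberate and the outcome is undefined by design:

```c
/* Contrived sketch of the same failure mode: a buffer overrun tramples
 * the recorded address of printf, so the next call through it jumps
 * into garbage. Deliberately undefined behavior -- that is the point. */
#include <stdio.h>
#include <string.h>

static struct {
    char buffer[8];                       /* our data */
    int (*print_fn)(const char *, ...);   /* "where printf lives" */
} slot = { "", printf };

int main(void) {
    slot.print_fn("before the overwrite, the call works fine\n");

    /* Deliberate bug: write far past the end of buffer. On a typical
     * 64-bit layout this fills print_fn with 'A' bytes. */
    memset(slot.buffer, 'A', sizeof slot);

    slot.print_fn("after the overwrite, anything can happen\n");
    return 0;
}
```

Typically the second call just crashes, but, as with any undefined behavior, it could do anything at all. Overwrite the dynamic linker's bookkeeping instead of your own function pointer, and "function not found" is one of the milder possible symptoms.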
Anecdote from many decades ago: In my misspent youth (I was actually in my 30s), I got paid to program in C++ on a Windows 3.1 machine, using the Waterloo memory extender (so we had a flat 32-bit address space, none of the 640k limitations of the original MS-DOS). During development, we discovered that sometimes after running our program you couldn't print any more: any attempt to use the "print" command from the command line would hang the computer. Strange, isn't it?

It turns out to be completely logical: at physical address zero, MS-DOS keeps an interrupt vector table, some of which is used for printing. The first dozen bytes are not commonly used, so you can actually write and read memory through a NULL pointer, as long as you only do a little of it. Do too much of it and the first thing that suffers is the print command; touch hundreds of bytes and things blow up and the system crashes. The scary thing is: you can have bugs in your code where you use NULL pointers, and your code keeps working, more or less. One of the best things about moving to Windows NT was that suddenly any attempt to use a NULL pointer (even for reading) immediately crashed, so finding such bugs was much faster.