Still with C. I started with C++ back in the Borland days, but it's overcomplex, and getting into a large program by reading the code is very time-consuming. C's disadvantage may be that everything must be written in so much detail that keeping any project organized requires strict commenting and uniform structure rules. Working from long-lived skeletons for everything, to save time, is common.
C++'s complexity means most developers can't tell you what is REALLY happening "between the semicolons". How many anonymous objects are instantiated to make a particular line of code work? How many machine cycles are required for what looks like a simple "assignment" operation?
I rely heavily on C as I write code that has to talk to bare metal. I need to know exactly what it is doing because some THING is reacting to the bus cycles that I am indirectly creating (maybe a motor is starting or a light is flashing or...).
C's biggest problem (IMnsHO) is the lack of strong type checking. E.g., only the base type is enforced, so I can create a type called Apples and another called Oranges -- both synonymous with some base type -- and the compiler will gladly let me make APPLEsauce by grinding ORANGES!
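A minimal sketch of the problem (the Apples/Oranges names and the function are mine, for illustration): a typedef in C creates a synonym, not a distinct type, so the compiler mixes the two without so much as a warning.

```c
#include <stdio.h>

/* Both "types" are mere aliases for the same base type;
   the compiler enforces only the base type, not the alias. */
typedef unsigned int Apples;
typedef unsigned int Oranges;

static Apples make_applesauce(Apples count) {
    return count * 2;   /* nominally wants Apples... */
}

int main(void) {
    Oranges oranges = 12;
    /* ...but Oranges pass through silently: typedef does not
       create a new type that the compiler could check against. */
    Apples sauce = make_applesauce(oranges);
    printf("%u\n", sauce);
    return 0;
}
```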
I use Limbo as a scripting language for "everyday joes" because it is a bit easier to deal with than C -- yet is familiar to anyone who has written code in C and supports mechanisms that I need (e.g., multitasking/processing, GC, lists, exceptions, etc.). As said (below), I have to use tools that will enable folks to work with my products and not just plan on obsolescence.
You choose a programming language based on the specific problem at hand.
Therefore asking “which languages don’t you use for which reasons?” is asking “which problems do you not have?”
Yes. But, among the "problems" are also the types of people that you will have supporting/maintaining the codebase, going forward. Code is meant to be written "few" times (no, not "once") but read many times! As such, you want to pick something that others will be able to grok easily/unambiguously without having to be fully aware of subtleties that may exist in the code.
E.g., &array[0] is redundapetitive. But, makes explicit what you are doing (vs. using a simple pointer type). What might handle=>method(args) mean (given that you've almost certainly NEVER seen that notation)?
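For the C half of that example, a quick sketch of the equivalence (variable names are mine): an array name decays to a pointer to its first element, so `&array[0]` yields exactly the same value while spelling out the intent.

```c
#include <stdio.h>

int main(void) {
    int array[4] = {1, 2, 3, 4};

    int *p1 = array;        /* implicit decay: terse, but the intent is hidden */
    int *p2 = &array[0];    /* same value; explicitly "address of first element" */

    printf("%d\n", p1 == p2);   /* prints 1: the two expressions are equivalent */
    return 0;
}
```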
I for one don’t program in C. This is because of the problems I face. C is good for single-board computers, the task being, e.g., regulating HVAC actuators based on real-time measurements. If I faced such a problem, I’d consider programming in C.
C is -- in comparison to other languages -- a terrible choice, e.g., if you want to program a graphical user interface, or for problems requiring (built-in) mathematical accuracy.
As far as I understand, C just lets overflows happen.
Sure. And, you can walk a pointer past the end of memory without the compiler complaining. But, you can also divide by zero, dereference a pointer to something that doesn't exist (or is of the wrong type), etc. Deciding against C because of overflow handling is just silly. Or, IMO, any of these other issues.
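For illustration, a sketch of that point (names are mine): every line below compiles cleanly under a typical C compiler, yet each marked operation is undefined behavior.

```c
#include <stdlib.h>

int main(void) {
    int a[4] = {0};
    int *p = &a[0];

    p += 10;        /* walks the pointer past the end of the array: compiles cleanly */
    *p = 42;        /* undefined behavior; no diagnostic required */

    int *q = malloc(sizeof *q);
    free(q);
    *q = 7;         /* dereference of freed storage: also accepted, also undefined */

    int zero = 0;
    int boom = 42 / zero;   /* divide by zero: compiles fine, may trap at run time */
    (void)boom;

    return 0;
}
```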
Too many languages attempt to treat developers as children -- ensuring that they can't "do bad things". Instead, the emphasis should be on educating them as to which things are bad -- and why. And, not hamstringing them out of some misplaced belief that you are HELPING them!
Pick a language. How would you write the code that handles maintaining cache consistency in that language? Or, that does TLB shootdowns? Or, can access and preserve/restore the machine's state. E.g., write an OS in <your favorite language>. Now, port it to another CPU...
I have a math library that allows for arbitrary precision "rationals". I.e., every value is a (Big_Integer, Big_Integer) tuple -- numerator and denominator. In <your favorite language>, how do you compute a value to 95 significant digits? How do you accurately represent 1/3?
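A minimal sketch of that idea, with `long long` standing in for the arbitrary-precision integers the real library would use (that substitution, and all names here, are mine; the sketch also omits sign normalization and a zero-denominator check): 1/3 is held exactly as a numerator/denominator pair, so nothing is ever lost to rounding.

```c
#include <stdio.h>

/* Toy rational: every value is a (numerator, denominator) pair.
   A real library would use big integers in place of long long. */
typedef struct {
    long long num;
    long long den;
} Rational;

static long long gcd(long long a, long long b) {
    while (b != 0) {
        long long t = a % b;
        a = b;
        b = t;
    }
    return a < 0 ? -a : a;
}

/* Construct in lowest terms so values stay canonical. */
static Rational rat_make(long long num, long long den) {
    long long g = gcd(num, den);
    Rational r = { num / g, den / g };
    return r;
}

static Rational rat_add(Rational x, Rational y) {
    return rat_make(x.num * y.den + y.num * x.den, x.den * y.den);
}

int main(void) {
    Rational third = rat_make(1, 3);          /* exactly 1/3: no rounding */
    Rational sum = rat_add(third, third);     /* 2/3, still exact */
    printf("%lld/%lld\n", sum.num, sum.den);
    return 0;
}
```

Reducing through the GCD at construction keeps every pair in lowest terms, so comparisons and printing stay canonical no matter how many operations have been chained.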
I design embedded systems. BIG systems (e.g., my current kernel is 100KLoC of C/ASM). And, systems that have to be reliable and available (my current project uses 288 processors), all dealing directly with hardware (no "payroll programs", here). A language that hides too much from me is nearly impossible to evaluate in that regard.
Do you even KNOW how big your code will be when executing on the iron? How will you know if the hardware that you have designed will support the code you will write for it?
Wrt C allowing overflows to happen: the hardware allows them to happen. If you were coding in ASM, you would be aware of the possibility of overflow in your code (but would typically have access to a flag in the PSW that you could examine). OTOH, if you know your code is vulnerable to overflow, you can explicitly check for it.
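A sketch of such an explicit check (the helper name is mine): the test below refuses a signed addition that would wrap, which is the portable equivalent of inspecting the overflow flag in the PSW. Note the check happens *before* the addition, since signed overflow itself is undefined behavior in C; on GCC or Clang, `__builtin_add_overflow()` does the same job in one call.

```c
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Returns true iff a + b would overflow a signed int. */
static bool add_would_overflow(int a, int b) {
    if (b > 0 && a > INT_MAX - b) return true;   /* would wrap past INT_MAX */
    if (b < 0 && a < INT_MIN - b) return true;   /* would wrap past INT_MIN */
    return false;
}

int main(void) {
    int a = INT_MAX - 1, b = 5;
    if (add_would_overflow(a, b))
        printf("refusing %d + %d: result would not fit\n", a, b);
    else
        printf("%d\n", a + b);
    return 0;
}
```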
[Hmmm... I can't figure out how to paste quoted text when EDITING my post -- website bug?]
"Infinite multiple precision operation, usually done as (packed)BCD, should not have the overflow problem if memory size allows, but would be much more expensive even if CPU has supports for it."
Overflow is always a problem -- because you don't have infinite resources (and can't constrain the operations that one might want to perform). You don't need a BCD data type. Nor do you need any particular "esoteric" feature(s) in the CPU to support it. But, the LANGUAGE in which you craft that software package can go a long way towards making this easy to implement -- or ghastly difficult!