Which programming languages don't you use, and for which reasons?

Rust. I can't understand the hype.
<soapbox>
I *mostly* agree with you with respect to C..... but Rust as a replacement for C++ (or Java) is verra, verra nice: powerful language features thoroughly integrated with the entire language spec and not *seemingly* tacked on (C++ again: a mostly C-compatible base with stuff tacked on that sometimes works in wildly different ways. Error handling within C++ is truly the worst I've ever experienced). And it produces fast (if large) executables.

Now, Rust isn't perfect either. It's still a toddler and changes *far* too frequently as the equally immature developers add the trendy feature du jour. Trying to use some very basic package only to find out that it requires a newer version of Rust than the (you thought) newish version you were already using is super bloody aggravating. But speed combined with memory safety is pretty nice. Compared to C (I've only ported two of my own smaller projects to Rust so far), the amount of code I *don't* spend checking for nulls or other errors not part of the actual solution is almost intoxicating. Rust also has some strange ideas that really chap my hide. It doesn't want you to put parentheses around things unless there is absolutely no other way. It complains overly much about unused variables and things. Et cetera. These warnings can, however, be disabled, so that's not too bad, just annoying.

I still think in C, though. And I probably always will.
</soapbox>
 
Rust also has some strange ideas that really chap my hide. It doesn't want you to put parentheses around things unless there is absolutely no other way.
I think that's almost fatally bad. If (not limited to any particular language) a specific compiler/interpreter has a fatal mathematical bug that yields X=9 for X=1+2*3 (of course, X=7 is correct), writing the code as X=(1+(2*3)) should work around it. It SHALL always be encouraged, in my humble opinion.
 
Still with C. I started with C++ in the Borland days, but it's overly complex and very time-consuming to get into a large program by reading the code. A disadvantage of C may be that everything must be written in so much detail that keeping any project organized requires strict commenting and uniform structure rules. Working with long-term skeletons for everything to save time is common.
Never really used any newer languages and compilers. I seem to be able to get everything outside of the program done with shell scripts and /usr/bin programs. No need for other high-level languages.
 
There are too many languages to list which I don't use.

From the popular ones - I don't use Java and JavaScript. I try to avoid C++. It is unnecessarily complicated, especially the latest versions. I prefer plain C and C#.
 
I think that's almost fatally bad. If (not limited to any particular language) a specific compiler/interpreter has a fatal mathematical bug that yields X=9 for X=1+2*3 (of course, X=7 is correct), writing the code as X=(1+(2*3)) should work around it. It SHALL always be encouraged, in my humble opinion.
You are correct. Rust doesn't go that far, however. It just complains about the outermost parens if they're, technically, unnecessary:


Code:
x = y + (m * n);   // no warning

x = (y + (m * n)); // warning, because that outer pair of parens does nothing *real*

This is still bad, but only in a slightly irritating way. I like extra parens to clarify the equation and my evaluation order.

The place I hate the Rust warning the most is with if statements. I like parens around my entire if condition even if Rust doesn't. :-(
 
Mostly because I do a lot of embedded design on smaller uPs, I don't like C++. "It's neither C nor anything ++." If you're doing dynamic memory allocation, you're probably not doing embedded stuff.

Python. Would it have killed them to allow braces instead of spaces? And why do my machines need to have 4 different versions to support programs written in Python? Is it really changing that much? It does seem like Python is the scripting language for a new generation though. "I just want to do something quickly, no matter how many megabytes it takes."

For web stuff I'm stuck with PHP/JavaScript. PHP is the gateway drug for people who think in C (procedural) and are experimenting with object orientation. JavaScript is software on hallucinogenic drugs that you end up using because PHP was your gateway drug.
 
IPSCRAE (Pig Latin for “script”) was the scripting language for Palace, a 2D chat platform that was popular in the early 2000s. An interesting attribute was that it used Reverse Polish Notation. So, in a normal generic language, you might say:

If (a == b)
then {action 1}
else {action 2}

In IPSCRAE, it is more like this:

{action 2}
{action 1}
if (a == b)

It’s been 20+ years, so I’m sure I don’t have the syntax exactly, but that illustrates the point.

Here is the language reference manual, in case anyone is interested:
 
You choose a programming language based on the specific problem at hand. 👖 Therefore asking “which languages don’t you use for which reasons?” is asking “which problems do you not have?”

I for one don’t program in C. This is because of the problems I face. C is good for single‑board computers, the task being e.g. regulating HVAC actuators based on real‑time measurements. 🌡️ If I faced such a problem, I’d consider programming in C.

C is – in comparison to other languages – a terrible choice, e.g. if you want to program a graphical user interface, or for problems requiring (built‑in) mathematical accuracy. 📐 As far as I understand, C just lets overflows happen.
 
C is – in comparison to other languages – a terrible choice, e.g. if you want to program a graphical user interface, or for problems requiring (built‑in) mathematical accuracy. 📐 As far as I understand, C just lets overflows happen.
If numerical (mathematical) accuracy is the target, code those parts in whichever of FORTRAN or COBOL would be the better choice. But FORTRAN, at least, is not at all good at programming UIs, so sane linking of mixed languages is essential.
 
If numerical (mathematical) accuracy is the target, code those parts in whichever of FORTRAN or COBOL would be the better choice. But FORTRAN, at least, is not at all good at programming UIs, so sane linking of mixed languages is essential.

Does Fortran have traps on integer overflow?
 
Does Fortran have traps on integer overflow?
Not sure. I haven't used Fortran in a long time, and it possibly depends on the implementation. But if overflow/underflow cannot be caught, it would not be usable for number crunchers, which is what Fortran is designed for.
 
Not sure. I haven't used Fortran in a long time, and it possibly depends on the implementation. But if overflow/underflow cannot be caught, it would not be usable for number crunchers, which is what Fortran is designed for.

Trapping overflow is computationally expensive, though. Most of the HPC crowd doesn't like that. And with AI software they throw all that out the window with 8 and even 4 bit number types.

/insert Common Lisp advertising
 
Trapping overflow is computationally expensive, though. Most of the HPC crowd doesn't like that. And with AI software they throw all that out the window with 8 and even 4 bit number types.

/insert Common Lisp advertising
Infinite (multiple) precision arithmetic, usually done as (packed) BCD, should not have the overflow problem as long as memory allows, but it would be much more expensive even if the CPU has support for it.

And if the CPU has a feature that generates an interrupt (trap) when the overflow flag is set, it would not be so expensive unless the overflow actually happens. But an interrupt handler needs to be set up for it.

Found this topic on Stack Exchange.
 
And if the CPU has a feature that generates an interrupt (trap) when the overflow flag is set, it would not be so expensive unless the overflow actually happens. But an interrupt handler needs to be set up for it.

I once wrote a C++ class like "int" that had some amd64 assembler in it to check the over/underflow flag after the operation. The expensive part is actually doing something about it other than calling abort().

It's a simple thing. While I kind of understand why C and C++ don't have it, I was really surprised that Java also tolerates overflow with no checking. Another nail in the Java coffin for me. I learned Java right after I had a serious bug in a C program due to overflow. And I was doing Common Lisp, too, which actually does something about it unless you explicitly say that you want to optimize it away.
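
By way of illustration, here's a minimal C sketch of the same kind of check -- not the poster's amd64 C++ class, just the GCC/Clang __builtin_add_overflow intrinsic, which tests the same hardware condition:

Code:
#include <stdio.h>
#include <stdlib.h>

/* Add two ints, aborting on overflow. Real code would do something
 * smarter than abort(), which is exactly the expensive part. */
static int checked_add(int a, int b)
{
    int result;
    if (__builtin_add_overflow(a, b, &result)) {
        fprintf(stderr, "integer overflow in %d + %d\n", a, b);
        abort();
    }
    return result;
}

int main(void)
{
    printf("%d\n", checked_add(1000000, 2000000));        /* fine */
    printf("%d\n", checked_add(2000000000, 2000000000));  /* aborts */
    return 0;
}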
 
 
C : Too much code
C++ : Too complex
C# : Too many classes
Perl : No compile time type checking
Let's move forward:
Lisp : Too many parentheses
Python: Way too much vertical space to express the obvious.
Rust: No Classes [ AFAIK, never used, saw no OOP, so no ]
Ruby: Too many web developers
Assembly: Too much to write
Tcl : Too many brackets
Octave: Only matrices
Mathematica: swap [] for () => Lisp.
Smalltalk: Too many mouse clicks
Java: Way too much boilerplate
....
As of today, my favorite are:
Ruby for scripting, C for embedded, C++ because I use a lot of Qt software and KDE and sometimes I like to change it a bit.
 
Still with C. I started with C++ in the Borland days, but it's overly complex and very time-consuming to get into a large program by reading the code. A disadvantage of C may be that everything must be written in so much detail that keeping any project organized requires strict commenting and uniform structure rules. Working with long-term skeletons for everything to save time is common.

C++'s complexity means most developers can't tell you what is REALLY happening "between the semicolons". How many anonymous objects are instantiated to make a particular line of code work? How many machine cycles are required for what looks like a simple "assignment" operation?

I rely heavily on C as I write code that has to talk to bare metal. I need to know exactly what it is doing because some THING is reacting to the bus cycles that I am indirectly creating (maybe a motor is starting or a light is flashing or...).

C's biggest problem (IMnsHO) is the lack of strong type checking. E.g., only the base type is enforced so I can create a type called Apples and another called Oranges -- both synonymous with some base type -- and the compiler will gladly let me make APPLEsauce by grinding ORANGES!

I use Limbo as a scripting language for "everyday joes" because it is a bit easier to deal with than C -- yet is familiar to anyone who has written code in C and supports mechanisms that I need (e.g., multitasking/processing, GC, lists, exceptions, etc.). As said (below), I have to use tools that will enable folks to work with my products and not just plan on obsolescence.

You choose a programming language based on the specific problem at hand. 👖 Therefore asking “which languages don’t you use for which reasons?” is asking “which problems do you not have?”

Yes. But, among the "problems" are also the types of people that you will have supporting/maintaining the codebase, going forward. Code is meant to be written "few" times (no, not "once") but read many times! As such, you want to pick something that others will be able to grok easily/unambiguously without having to be fully aware of subtleties that may exist in the code.

E.g., &array[0] is redundapetitive. But, makes explicit what you are doing (vs. using a simple pointer type). What might handle=>method(args) mean (given that you've almost certainly NEVER seen that notation)?
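
(For anyone who hasn't seen the idiom, the two forms below are equivalent in C -- the names are only illustrative:)

Code:
int samples[16];
int *p = samples;        /* the array name decays to a pointer to its first element */
int *q = &samples[0];    /* same value, but the indexing intent is spelled out */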

I for one don’t program in C. This is because of the problems I face. C is good for single‑board computers, the task being e.g. regulating HVAC actuators based on real‑time measurements. If I faced such a problem, I’d consider programming in C.
C is – in comparison to other languages – a terrible choice, e.g. if you want to program a graphical user interface, or for problems requiring (built‑in) mathematical accuracy. 📐
As far as I understand, C just lets overflows happen.
Sure. And, you can walk a pointer past the end of memory without the compiler complaining. But, you can also divide by zero, dereference a pointer to something that doesn't exist (or is of the wrong type), etc. Deciding against C because of overflow handling is just silly. Or, IMO, any of these other issues.

Too many languages attempt to treat developers as children -- ensuring that they can't "do bad things". Instead, the emphasis should be on educating them as to which things are bad -- and why. And, not hamstringing them out of some misplaced belief that you are HELPING them!

Pick a language. How would you write the code that handles maintaining cache consistency in that language? Or, that does TLB shootdowns? Or, can access and preserve/restore the machine's state. E.g., write an OS in <your favorite language>. Now, port it to another CPU...

I have a math library that allows for arbitrary precision "rationals". I.e., every value is a (Big_Integer, Big_Integer) tuple -- numerator and denominator. In <your favorite language>, how do you compute a value to 95 significant digits? How do you accurately represent 1/3?
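
Purely as a sketch of the representation (with long long standing in for the Big_Integers, so this is not the poster's arbitrary-precision library):

Code:
#include <stdio.h>

/* A rational as a (numerator, denominator) pair, kept in lowest terms.
 * Real arbitrary precision would replace long long with a bignum type. */
typedef struct { long long num, den; } Rational;

static long long gcd(long long a, long long b)
{
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

static Rational make_rational(long long num, long long den)
{
    long long g = gcd(num, den);
    Rational r = { num / g, den / g };
    return r;
}

static Rational rat_add(Rational a, Rational b)
{
    return make_rational(a.num * b.den + b.num * a.den, a.den * b.den);
}

int main(void)
{
    Rational third = make_rational(1, 3);       /* exactly 1/3, no rounding */
    Rational sum   = rat_add(third, third);     /* 2/3 */
    printf("%lld/%lld\n", sum.num, sum.den);
    return 0;
}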

I design embedded systems. BIG systems (e.g., my current kernel is 100KLoC of C/ASM). And, systems that have to be reliable and available (my current project uses 288 processors), all dealing directly with hardware (no "payroll programs", here). A language that hides too much from me is nearly impossible to evaluate in that regard.

Do you even KNOW how big your code will be when executing on the iron? How will you know if the hardware that you have designed will support the code you will write for it?

Wrt C allowing overflow to happen:
the hardware allows it to happen. If you were coding in ASM you would be aware of the possibility of overflow in your code (but would typically have access to a flag in the PSW that you could examine). OTOH, if you know your code is vulnerable to overflow, you can explicitly check for it.
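
For example, a portable pre-check in plain C (names here are purely illustrative):

Code:
#include <limits.h>

/* Explicit pre-check: returns 1 if a + b would overflow an int, 0 otherwise. */
int add_overflows(int a, int b)
{
    return (b > 0 && a > INT_MAX - b) ||
           (b < 0 && a < INT_MIN - b);
}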

[Hmmm... I can't figure out how to paste quoted text when EDITING my post -- website bug?]

"Infinite multiple precision operation, usually done as (packed)BCD, should not have the overflow problem if memory size allows, but would be much more expensive even if CPU has supports for it."

Overflow is always a problem -- because you don't have infinite resources (and can't constrain the operations that one might want to perform). You don't need a BCD data type. Nor do you need any particular "esoteric" feature(s) in the CPU to support it. But, the LANGUAGE in which you craft that software package can go a long way towards making this easy to implement -- or ghastly difficult!
 
C's biggest problem (IMnsHO) is the lack of strong type checking.
You just have to do that yourself. C allows strict typing but you can choose to ignore it because not every target platform requires it. This is the bias in the recent C/C++ vs Rust discussion. They just don't get what low-level means. On an offline embedded system, you might not need to prevent a buffer-overflow exploitation because nobody can touch that memory anyway.
I think it's mostly app makers, not aware of being in a virtual sandbox, far away from the machine.
 
I don't want to pick the most suitable language for the task. I want one language for the widest variety of tasks possible. Right now only Common Lisp does that.

That doesn't mean it should cover what shell scripting does, or that I can use it in the kernel. Although there have been a number of attempts to do a Unix scripting domain-specific sub-language (for the same compiler) in Lisp, usually Scheme. I'm actually not sure whether Common Lisp currently has anything. The last time Lisp in the kernel was possible were the Lisp Machines, which aren't coming back anytime soon thanks to the *&#*&#!! situation around the Symbolics source code.

But anyway, I don't want 4 languages between scripting and kernel. Because then I cannot re-use code. I like to keep code and re-use it. That is particularly important if you wrote such code properly, with automated tests and whatnot. Doing all that over in the next language leads to sloppiness. And the next language's implementation usually only has a subset of the functionality. I might be getting paid by hour (effectively), but that's just tedious.
 
You just have to do that yourself. C allows strict typing but you can choose to ignore it because not every target platform requires it.

No. You've missed the point. I want to create two new types:

typedef int Apple;
typedef int Orange;


In a strongly typed system, I shouldn't be able to do:

Apple Macoun;
Orange Valencia;

MakeApplesauce(Valencia);


C just considers type declarations to be synonyms for the underlying type. I.e., Macoun and Valencia are really just ints, in this case, despite my wanting to give them different type names.
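
A common workaround (not mentioned above) is to wrap each type in a single-member struct; the compiler then does treat them as distinct:

Code:
typedef struct { int value; } Apple;
typedef struct { int value; } Orange;

void MakeApplesauce(Apple a) { (void)a; /* stub */ }

int main(void)
{
    Apple  Macoun   = { 1 };
    Orange Valencia = { 2 };

    MakeApplesauce(Macoun);        /* fine */
    /* MakeApplesauce(Valencia);      error: incompatible type -- the check we wanted */
    (void)Valencia;
    return 0;
}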

On an offline embedded system, you might not need to prevent a buffer-overflow exploitation because nobody can touch that memory anyway.

Actually, buffers should always be treated as exploitable because, often, they ARE exposed. E.g., if the user is allowed to type in a "product identifier" so I can use that to label the cartons of "product" passing on the conveyor, do I trust that he will not type in too many characters? Ditto with communications ports that interact with other machines -- am I sure they won't give me data that I can't handle? If you allow a mag stripe reader to source data, are you sure that it won't provide more data than you expect? That an adversary won't hack the connection to deliberately exceed your expectations of what's "valid"/expected?
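
A minimal sketch of that kind of defensive read in C (the buffer name is only illustrative): bound the read with fgets so excess input is truncated rather than trusted:

Code:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char product_id[32];    /* whatever the operator is allowed to type */

    /* fgets never writes more than sizeof product_id bytes, so typing
     * too many characters truncates instead of overrunning the buffer. */
    if (fgets(product_id, sizeof product_id, stdin) != NULL) {
        product_id[strcspn(product_id, "\n")] = '\0';   /* strip trailing newline */
        printf("carton label: %s\n", product_id);
    }
    return 0;
}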

But anyway, I don't want 4 languages between scripting and kernel. Because then I cannot re-use code. I like to keep code and re-use it.

So, I should write my OS in the same language that I use to write the device drivers that tie into it, the applications that run atop it, the interface to the RDBMS and the scripts that users want to be able to write? That would lead to either an overly complex language or a crippled one. Either way, getting the job done would be tedious, at best. Try using Ada to rename all files of the form <name><number> as <number><name>, instead.

Learning a new language is usually a trivial task. And, one wants to reuse designs much more so than actual code. Using a new programming paradigm is considerably harder. Learning how to deal with the illusion of concurrency that multitasking (and languages that directly support it) provides still stumps people -- because their minds are single-threaded. Learning how to deal with true concurrency (e.g., multiple cores) is an even bigger hurdle. And, the whole notion of distributed systems just leaves them blabbering ("What do you mean, the function might FAIL to be invoked? But, it's the next line of code... how can it NOT be invoked??!")

Four languages is almost a prerequisite (in my line of work). I write a fair bit of ASM in the bowels of the kernel; most of the kernel and applications in C, interface to the RDBMS in SQL and "script" in Limbo. (I should also support something "lighter" than Limbo but that would be too much to put on the users)
 
No. You've missed the point. I want to create two new types:

typedef int Apple;
typedef int Orange;


In a strongly typed system, I shouldn't be able to do:

Apple Macoun;
Orange Valencia;

MakeApplesauce(Valencia);
That does work, though? The function declaration must contain the type too, right?
You can ignore all types in C by just referring to bytes all the time. Does that make it not strongly typed?
 