> I try to avoid C++. It is unnecessarily complicated, especially the latest versions. Prefer plain C...

For me, it's the other way round. I prefer C++ over C because it makes things easier.
> That does work? The function declaration must contain the type too, right?

Yes, that works. Indeed, a typedef just introduces an alias name, not a new type. Of course, the function declaration must declare the types, but still Apple and Orange are just different names for the same type here. C even has the concept of compatible types. A declared struct is a new type, but still two structs with identical "bodies" are compatible as soon as they are declared in different compilation units ... (I mean, it makes sense because there's no type information in the ABI, but it sounds quite weird at first).

> You can ignore all types in C by just referring to bytes all the time. That makes it not strongly typed?

Actually no, you can't. You can always access any byte of any object in C because the language allows a char * pointer to alias any other pointer type. But that won't get you far, because the actual representation of types is implementation defined. An int on your typical amd64 looks very different in memory than it does on, say, an MC68000.
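A minimal sketch of both points, with the function name and values invented for illustration; the byte printed at the end depends on the platform:

    #include <stdio.h>

    typedef int Apple;    /* alias names ... */
    typedef int Orange;   /* ... for the very same type */

    void MakeApplesauce(Apple a) { printf("sauce from %d\n", a); }

    int main(void)
    {
        Orange o = 23;
        MakeApplesauce(o);   /* compiles cleanly: Apple and Orange are both int */

        /* A char * may alias any object, so the bytes of an int are always
           reachable -- but their layout is implementation defined:
           little-endian on amd64, big-endian on an MC68000. */
        int n = 1;
        unsigned char *p = (unsigned char *)&n;
        printf("first byte: %d\n", p[0]);   /* 1 on amd64, 0 on an MC68000 */
        return 0;
    }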
> I prefer C++ over C because it makes things easier.

You likely work in a resource-rich environment -- where execution time and available memory are "without cost".
> Yes, that works. Indeed, a typedef just introduces an alias name, not a new type. [...]

I think a way to bypass type checking doesn't make a compiler "weakly typed".
> I think a way to bypass type checking doesn't make a compiler "weakly typed".

C's notion of types is just an illusion, for the most part. One wants to be able to create new types so you can leverage the compiler's ability to ENFORCE restrictions on the sorts of things a coder could do -- like making applesauce out of oranges. C++, OTOH, would prevent me from making "applesauce" out of oranges and "orange juice" out of apples. But getting this level of protection comes at too high a cost!

> C's notion of types is just an illusion, for the most part. [...]

Isn't everything that abstracts away from the hardware architecture an illusion? I mean, in the end your whole program structure is made of AND, OR and XOR with an instruction pointer to remember where you're at. No reason to enforce limitations on this with prescribed programming methods and languages. They might like it at Google, MS and Oracle, though.

> So, I should write my OS in the same language that I use to write the device drivers that tie into it, the applications that run atop it, the interface to the RDBMS and the scripts that users want to be able to write? That would lead to either an overly complex language or a crippled one.

This is kind of the approach that the "post-UNIX" industry took at first: "C for everything".
> So, I should write ... the interface to the RDBMS ...

Though I believe a decent compromise was quickly made with (close) supersets of C. For example, whilst the kernel and drivers are written in C, the DBMS could be written in C++. Importantly, the DBMS in C++ could directly interface with the operating system's C libraries. This part is key.
> Abstraction is saying that an int can represent an Apple. Or, an Orange.

That's not an abstraction, that's a simple translation/representation. An abstraction would be defining them both as "fruit", and using it in a generalized function MakeJuice(fruit), which can create apple juice or orange juice depending on the type of fruit you shoved in it.

> That's not an abstraction, that's a simple translation/representation. [...]

The abstraction is that they are both objects -- with ints being a chosen way of referencing them. One could have used floats, strings, some complex data type, etc. to the same result. The example could just as easily have been:

    Automobile Mercedes;
    MakeApplesauce(Mercedes);

[Clearly Apples and Automobiles have no underlying commonality -- beyond being objects/things.]

> The abstraction is that they are both objects -- with ints being a chosen way of referencing them. [...]

All different representations; a computer doesn't know (or care) what an apple or orange is, so we represent them in a way a computer can deal with. You're not abstracting here, especially if you stick it in a specific function designed for one specific value (or type) of object. The abstraction happens when you use a generalized function that works for any type of fruit, or even any object. The 'function' of making juice is the same for all fruits: you squash it, then filter out the solids. A generalized function MakeJuice would have no problem with this. The result won't be pretty, obviously, but the function itself would still work the same without having to modify it to accommodate an entirely different kind of object. Heck, somebody might find getting "car juice" useful in the future.

> All different representations; a computer doesn't know (or care) what an apple or orange is ... [...]

A computer knows nothing of "integers", floating point numbers, characters, strings, etc. These are all abstractions that we (as humans) map onto "collections of bits" which, themselves, are abstractions of "states of charge" in an electrical circuit.

> A generalized function MakeJuice would have no problem with this.

Just because the code doesn't CRASH doesn't mean the code is correct or not "problematic". What does MakeApplesauce(23) *mean* -- to a human? How do you map that onto a meaningful concept (i.e., abstraction)?
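To make the generalized-MakeJuice idea concrete, here is a minimal C sketch (all type and field names are invented for illustration): the function depends only on what every juiceable thing has in common.

    #include <stdio.h>

    /* The generalization: every juiceable thing, fruit or Mercedes,
       is reduced to the parts MakeJuice actually needs. */
    struct Juiceable {
        const char *name;
        double liquid_ml;   /* what comes out once you squash it */
    };

    /* Squash it, filter out the solids -- same for apples, oranges, cars. */
    void MakeJuice(struct Juiceable thing)
    {
        printf("%.1f ml of %s juice\n", thing.liquid_ml, thing.name);
    }

    int main(void)
    {
        struct Juiceable apple    = { "apple", 120.0 };
        struct Juiceable mercedes = { "car",     0.5 };   /* not pretty, but it works */
        MakeJuice(apple);
        MakeJuice(mercedes);
        return 0;
    }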
> You misread my post:

Good point, but in some ways that makes it even easier as a decision. Almost any interface (such as libmysql, liboci8, etc.) is (and should be) written in C or exposed as a C API. The simple reason is that almost any language has some sort of interop with C; getting, say, Java <-> Python to share libraries is a pain in the butt. In terms of an interface for communication between the DBMS, as you were referring to, SQL is more like a protocol. Yes, I absolutely would not like to use C or C++ for that. Things like SQL or HTML tend to not quite come under the same rules as typical languages. They aren't really "programming" languages as such.
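As a sketch of what "exposed as a C API" typically looks like (the library name and functions here are hypothetical, loosely in the style of libmysql), the key is a plain C header that C++ and other languages can bind against:

    /* mydb.h -- hypothetical C API for a DBMS client library */
    #ifndef MYDB_H
    #define MYDB_H

    #ifdef __cplusplus
    extern "C" {          /* keep C linkage when included from C++ */
    #endif

    typedef struct mydb_conn mydb_conn;   /* opaque connection handle */

    mydb_conn *mydb_connect(const char *host, int port);
    int        mydb_query(mydb_conn *db, const char *sql);
    void       mydb_close(mydb_conn *db);

    #ifdef __cplusplus
    }
    #endif

    #endif /* MYDB_H */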
> A computer knows nothing of "integers", floating point numbers ... [...]

Common general-purpose CPUs actually do know of these. That's how they work, by manipulating numbers; that's the only thing they can do. Although manipulating floating point numbers was once done on an entirely separate co-processor, it has since been integrated into the CPU itself. And you may find certain architectures take a more modular approach (RISC-V, for example), but the one thing they are all capable of is manipulating discrete numbers.

> ... characters, strings, etc.

These are typically represented by discrete numbers using some form of encoding standard: ASCII, EBCDIC or Unicode, for example.
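A one-line C illustration of that point (the printed value assumes an ASCII execution character set):

    #include <stdio.h>

    int main(void)
    {
        /* 'A' is just whatever discrete number the encoding assigns:
           65 in ASCII, 193 in EBCDIC. */
        printf("%d\n", 'A');
        return 0;
    }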
> Just because the code doesn't CRASH doesn't mean the code is correct or not "problematic".

Computers are stupid machines: they do exactly what you tell them to do, correct to a fault. If the result isn't what you expected, or the computer does something wrong, it simply means you, as its programmer, made a mistake (a logic error, for example).
> What does MakeApplesauce(23) *mean* -- to a human?

Make applesauce from 23 apples? When I see a function defined this way, I'm going to assume the argument is the number of apples. Why would you provide a number representing an 'apple' as an argument to a function if the only thing it can do is create applesauce from apples?
> Things like SQL or HTML tend to not quite come under the same rules as typical languages. [...]

The Crystal language can easily create HTML & SQL queries in a few lines.
> The Crystal language can easily create HTML & SQL queries in a few lines.

Code generators have always been popular for that kind of stuff. For HTML, when I begrudgingly have to make simple web pages, I tend to use M4.
> Common general-purpose CPUs actually do know of these. That's how they work, by manipulating numbers; that's the only thing they can do. [...]

Are they *integers*? That's a concept that we have mapped onto the contents of certain registers. E.g., if I am manipulating an ADDRESS register, then I'm likely manipulating an ADDRESS. In my gesture recognizer, all of the quantities manipulated are fixed-point (Q10.6) "values", because that's how I interpret the results of the calculations that I perform on them (fitting cubic beziers). Not floating point, not integers.
> Make applesauce from 23 apples? When I see a function defined this way, I'm going to assume the argument is the number of apples. [...]

The programmer didn't provide a "number". The (C) compiler interpreted it as a "number" because the C compiler has no way of enforcing the distinction between ints and Apples. To it, they are the same! If it were more strongly typed, then it would have thrown an error, as an int would not have been an acceptable argument to the function -- which was the point of my post (neither would an Orange have been permitted).
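For what it's worth, the usual C workaround for exactly this is to wrap the int in a single-member struct; each wrapper is then a distinct type and the compiler does enforce the distinction. A minimal sketch using the thread's names:

    #include <stdio.h>

    typedef struct { int id; } Apple;
    typedef struct { int id; } Orange;   /* distinct type, despite the identical body */

    void MakeApplesauce(Apple a) { printf("sauce from apple %d\n", a.id); }

    int main(void)
    {
        Apple  a = { 1 };
        Orange o = { 2 };
        MakeApplesauce(a);     /* fine */
        /* MakeApplesauce(o);     error: incompatible type */
        /* MakeApplesauce(23);    error: an int is not an Apple */
        (void)o;
        return 0;
    }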
> In terms of an interface for communication between the DBMS, as you were referring to, SQL is more like a protocol. Yes, I absolutely would not like to use C or C++ for that.

Of course SQL is a language. It has a grammar. It has an abstract machine on which it operates. It is Turing complete. The same is true of PostScript (though folks rarely WRITE PS code).
> Of course SQL is a language. It has a grammar. It has an abstract machine on which it operates. It is Turing complete. [...]

Ultimately there isn't too much difference between it and what we send when requesting data over, e.g., the IRC protocol. That uses a grammar too, and could be over-engineered enough to utilise an abstract machine.
> The point was that these languages address specific application domains -- effectively. Eliminating them in favor of some "more common" language would be a disservice to the folks tasked with these problems.

I kind of see what you are saying. But in the same way, they are really just configuration files, in the same way that .ini or .json are. This is a completely different application domain to programming languages. Thus they are not really a good argument against "just write everything in C".
> The topic title mentioned "programming languages" which, general purpose or not, does tend to exclude SQL or HTML.

But you can "write code" in SQL. You can manipulate the state of the database and the data within it in much the same way that you manipulate bytes in memory (using a conventional language).
> I kind of see what you are saying. But in the same way, they are really just configuration files [...] Programming everything in C doesn't exclude them, in the same way we don't program image files in C.

You underestimate them.