FreeBSD Dynamic Programming

Prefixing variable names with "my" is laughable imho.
Now we're all just peeing in a pot and claiming it's raining. My example was simply following what kpedersen used, except for my use of camel case (simply because of current $WORKPLACE coding standards...).

Yes, you're right. Should be a lower case "C". I just copied and pasted it without changing it since I am lazy.

And a real-life example of the perils of cut and paste :)
 
Well I agree and think descriptive class *and* variable names are important. So...

Code:
ClassContainingIdNameEmployeeNumberDepartmentPointerAndManagerId classContainingIdNameEmployeeNumberDepartmentPointerAndManagerId = findEmp();

This codebase would be a joy to read!

(I only really use "my" in names for trivial code examples where you don't have (or need) the context to know if it is a parameter, a member or neither. It is mostly pointless though admittedly)
 
Although if, as part of a corporate merger, one were responsible for doing something with two different employee databases, variable names such as "myEmp" and "theirEmp" would make some sense.


Ok, it would be "nonsense" but....
 
Thanks for the comments, my friends; I am grateful to you. I will make a special recommendation for you.

I hope you guys have a good read and maybe even thank me for it. These days it's hard for anyone to pass knowledge on to someone else. As an old-timer I appreciate knowledge; of course, it makes us effective.

 
If there were a way to approach it, I just think it would be boring. The Linux kernel is boring in itself, no offense, but I'm tired of having to complain about the same things...

When will someone think differently about developing the Linux kernel and give a voice to the Linux user? Maybe they want to keep it hidden from their users.
 
What I hate is one thing in particular: anything that goes against the philosophy. Maybe some developer will make a suggestion about how to develop and how to optimize the code, and be crucified for it. Our freedom to think about how things are made does not exist. I thought about this carefully when I entered this field; I knew it was like this.
The world does not work the way we think, where we can defend our interests, our philosophy and our cultures.
 
Grumpy old software architect here; I had never heard of "dynamic programming" before, and I don't think I've really missed anything so far...

I did see lots of "awesome" ideas go down the drain over time, though. To name just some: SOA and SOAP, and maybe also, to some extent, OOP, at least once people started to realize that deep inheritance trees (especially if behavior is inherited as well) are the new spaghetti code, well on par with the worst stuff you could create with "goto" as your only control structure. Ok, that didn't kill OOP, but it changed what's considered good OOP design.

I wish people in the business would stop inventing buzzwords. Sure, you have to name your idea somehow. Most of the few software design principles that proved to be of lasting value have descriptive names, though, like e.g. the single responsibility principle.

(No, I don't judge this concept to be bad here. I just learned some scepticism :cool:)
 
Grumpy old software architect here, [...], well on par with the worst stuff you could create with "goto" as your only control structure.
While I might not be as old as you, I've been told that I'm certainly decades ahead on the grumpiness. But is it really my fault that most things seem to truly suck these days?
Anyway, I came here as I'd like to argue that there are still cases where using goto is the superior solution (yes, also in terms of readability).

Academically, new programmers are told very early on that "goto is bad" but often no explanation for this is given, especially in the basic courses/lectures. This managed to get to a point where novice programmers seem to be simply scared of goto. They look at your code, they see a goto somewhere and they react like there's a pile of dead human bodies in your drawer.
I know a lot of "less experienced programmers" who truly think that there is something wrong with goto on a technical level, that it creates incorrect instructions or whatever. But that is simply not true. The main (and, as far as I know, only) reason for goto to be considered a bad thing is that it makes it very easy to create opaque code that is hard to read and maintain. But as with most things: if used incorrectly, it sucks, yeah. That doesn't mean that the use of goto itself is bad or forbidden.

One place where I still find myself using goto is in initialization sequences that have non-trivial clean-up sequences attached to them (eg. multiple stages/phases of initialization where you have to perform the cleanup in a particular sequence if you have to abort mid-way).
I think the poster child here is drivers. There, properly used goto makes for much easier-to-read (and, I'd argue, safer) code compared to not using goto.
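To illustrate, here is a minimal sketch of that staged init/cleanup pattern in C. The helpers map_registers, alloc_dma_buffer and enable_interrupts are invented stand-ins for illustration, not any real driver API:
Code:
#include <stdio.h>
#include <stdlib.h>

/* Invented stand-ins for real driver operations. */
static void *map_registers(void)     { return malloc(4096); }
static void *alloc_dma_buffer(void)  { return malloc(65536); }
static int   enable_interrupts(void) { return 0; }   /* 0 = success */

/*
 * Multi-stage initialization: every stage that succeeded must be undone,
 * in reverse order, if a later stage fails.  The goto labels form a single
 * cleanup ladder at the bottom instead of duplicated cleanup code in every
 * error branch.
 */
static int driver_attach(void)
{
    void *regs, *dma;

    regs = map_registers();
    if (regs == NULL)
        goto fail;

    dma = alloc_dma_buffer();
    if (dma == NULL)
        goto fail_regs;

    if (enable_interrupts() != 0)
        goto fail_dma;

    return 0;               /* success: resources stay allocated (freed on detach, not shown) */

fail_dma:
    free(dma);              /* undo stage 2 */
fail_regs:
    free(regs);             /* undo stage 1 */
fail:
    return -1;
}

int main(void)
{
    printf("attach %s\n", driver_attach() == 0 ? "succeeded" : "failed");
    return 0;
}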
 
jbodenmann, if we're really discussing that, my take on it is: goto is a huge gain in some (often idiomatic) cases when the language is somewhat close to machine code (e.g. requires explicit resource management). In other words, it has a few very good uses in C and, to a lesser extent, C++.

Looking at other languages, for example C#: in all my time writing C# code I never found a single good use for goto. For your typical scenarios, you're much better off using try-finally and/or using blocks. In C++, you could do a similar thing if you want to wrap everything with exceptions and strictly follow RAII, but that's another story (IMHO the combination of exceptions with explicit resource management is a broken concept), and I don't think that's always a good idea. In C, trying to dogmatically avoid goto where you'd normally need it just leads to weird things like abusing other control structures in creative ways, so yes, unreadable code.
 
This week I had a nasty argument with a Russian friend of mine because he thinks Perl is an outdated language; it felt like a punch in the gut. I gave my opinion on Perl. I also consider goto something that gets us deeply involved with the algorithms of how to actually approach a project; it's not like walking down a dark alley.

Today's generation of programmers never learned how to tell a superior software designer apart, or how that is even possible. Math, algorithms, 24 hours of study: don't take your college professor's ideas as dogma if you stick to only one thing. I like Pythagoras because he says something far beyond math and algorithmic calculations.
 
I'll see your goto and raise you a setjmp/longjmp :)
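For anyone who hasn't seen that pair in action, a minimal sketch: setjmp saves an execution context and longjmp unwinds back to it across call frames. The parse_record function and its failure condition are invented for illustration.
Code:
#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;            /* saved execution context */

/* Invented worker: bail out across call frames with longjmp. */
static void parse_record(int value)
{
    if (value < 0)
        longjmp(on_error, 1);       /* non-local jump back to the setjmp site */
    printf("parsed %d\n", value);
}

int main(void)
{
    if (setjmp(on_error) != 0) {    /* returns non-zero after a longjmp */
        fprintf(stderr, "bailed out of parsing\n");
        return 1;
    }
    parse_record(42);
    parse_record(-1);               /* triggers the jump above */
    return 0;
}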

As "YAGOP/SA" any tool becomes bad when used incorrectly. Screwdrivers to open paint cans? No, they make can openers for that.
My opinions:
OOP is good if it makes you stop and think: you design something correctly and implement it in whatever language and prove correctness.
Why do all the "new and improved with a free set of knives" languages compare themselves to C/C++?
I have a book somewhere, Niklaus Wirth's "Algorithms + Data Structures = Programs". To the best of my knowledge, that is still true.
So think about the problem, design the solution, implement it correctly in whatever language and who cares what anyone else says.
 
[...] he thinks perl is an outdated language [...]
What is an outdated language anyway? Either it's the right tool for the job or it isn't. The only thing that changes is that more choices become available as time moves forward.
If the language (and surrounding ecosystem) is alive and well maintained, it's just another available language that may or may not be the right tool for the job.

C is supposedly outdated, and I still write C code every damn day for fun and profit. Why? Because it actually isn't an outdated language; it just happens to be the language of choice for some situations.

This is, in my opinion, similar to "BSD is dying". Just because there are more options available compared to 25 years ago doesn't mean that BSD is dead. It might just be that for a particular situation another OS is the preferred choice (which does not necessarily need to be a technically driven decision).
Let alone the fact that these days orders of magnitude more people actually use computers. Those people might have different requirements and preferences which might make BSD the non-chosen option, but that doesn't mean that BSD is dying. I'd even guess that the absolute number of FreeBSD users has increased over the years; it's just that, relatively, there might be a decline because more people use more different non-(Free)BSD operating systems. FreeBSD is still doing well despite that.

Newly available options (eg. new languages or new operating systems) don't automatically make the previously existing stuff outdated or obsolete.
 
What is an outdated language anyway? Either it's the right tool for the job or it isn't. [...]
Newly available options (eg. new languages or new operating systems) don't automatically make the previously existing stuff outdated or obsolete.
I personally don't think BSD is dying. What is really dying is people's ability to understand the needs of each industrial sector. Today students want something ready-made, so that they can just enjoy the algorithm that we spend day and night implementing.
 
Anyway, I came here as I'd like to argue that there are still cases where using goto is the superior solution (yes, also in terms of readability).
It is very easy to see that "goto" is more expressive and hence more readable if one does not abuse it.

It is very simple to implement a "while" or a "for" with "goto", and you get something readable, but of course the "while" and "for" versions are more readable; we cannot say the opposite. To emulate an arbitrary "goto" with other control structures, you can put a big switch inside a while encompassing the whole program, which is not very readable. Normally one uses flag variables to avoid a "goto", which is also an artificial solution.
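To make the comparison concrete, a small self-contained sketch in C; the loop bodies are arbitrary examples, nothing more:
Code:
#include <stdio.h>

int main(void)
{
    int i, j, done = 0;

    /* A plain structured loop ... */
    for (i = 0; i < 3; i++)
        printf("for:  %d\n", i);

    /*
     * ... and the same loop written with goto: it works and is still
     * readable, but the for/while version states the intent more directly.
     */
    i = 0;
again:
    if (i < 3) {
        printf("goto: %d\n", i);
        i++;
        goto again;
    }

    /*
     * Going the other way: avoiding a goto with a flag variable.  The flag
     * must be re-tested in every loop condition, which is the artificial
     * workaround mentioned above; a single "goto found;" would express the
     * early exit more directly.
     */
    for (i = 0; i < 3 && !done; i++)
        for (j = 0; j < 3 && !done; j++)
            if (i + j == 3)
                done = 1;           /* instead of: goto found; */

    return 0;
}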

As someone who learned programming on old pocket calculators and with FORTRAN IV, this is clear to me. I really do not know what people think who learned "structured programming" from the beginning and were told that goto is bad style.
 
As shown, there’s a significant increase in the number of times our function is called. Similar to our previous example, the algorithm’s performance decreases exponentially based on the input size. This occurs because the operation does not store previously calculated values.
Try this:
Code:
(define (fib n)
  (if (< n 2)
     n
     (let* ((k (quotient (+ n 1) 2)) (fk-1 (fib (- k 1))) (fk (fib k)))
         (if (even? n) (* fk (+ fk (* 2 fk-1))) (+ (* fk fk) (* fk-1 fk-1))))))
This recursive version is likely faster than your memoized version for large n! Try something like
Code:
(time (begin (fib 10000000) #t))
in gsi-gambit from gambit-c.
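For comparison, here is roughly what the memoization the quoted paragraph alludes to looks like in C. This is only a sketch: it uses fixed-size 64-bit integers, so it is limited to n <= 93, unlike the bignum-based Scheme version above.
Code:
#include <stdio.h>
#include <stdint.h>

#define MAXN 94                 /* fib(93) is the last value that fits in 64 bits */

static uint64_t memo[MAXN];     /* memo[k] == 0 means "not computed yet" */

/*
 * Memoized Fibonacci: each fib(k) is computed once and stored, turning the
 * exponential naive recursion into a linear number of calls.  This is the
 * basic dynamic-programming idea of reusing previously calculated values.
 */
static uint64_t fib(int n)
{
    if (n < 2)
        return (uint64_t)n;
    if (memo[n] != 0)
        return memo[n];         /* reuse the stored result */
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    for (int n = 0; n <= 93; n++)
        printf("fib(%d) = %llu\n", n, (unsigned long long)fib(n));
    return 0;
}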
 
Try this: [...]

But that is the recursion you used for this specific problem. Now try again and change the approach; there is always a way to optimize with the right algorithm.
 
Try this: [...]
This recursive version is likely faster than your memoized version for large n!
Now change the approach: if performance is the concern, then develop a more effective algorithm for this problem. That is the goal of this thread.
 
Academically, new programmers are told very early on that "goto is bad" but often no explanation for this is given, especially in the basic courses/lectures. This managed to get to a point where novice programmers seem to be simply scared of goto. They look at your code, they see a goto somewhere and they react like there's a pile of dead human bodies in your drawer.
Well, this is the achievement/fault of this man here:
[Photo: Edsger W. Dijkstra]

Please meet Edsger Wybe Dijkstra (born May 11, 1930, died August 6, 2002), a computer scientist from the Netherlands and one of the most influential figures of the field's founding days. He's quite well known for his achievements in computer science, and received the Turing Award back in 1972.

Amongst his works are Dijkstra's algorithm, which determines the shortest path between nodes in a graph, work on the first ALGOL 60 compiler, and more.

To quote the Wikipedia: One of the most influential figures of computing science's founding generation,[2][3][5][6][12][13] Dijkstra helped shape the new discipline both as an engineer and a theorist.[14][15] His fundamental contributions cover diverse areas of computing science, including compiler construction, operating systems, distributed systems, sequential and concurrent programming, programming paradigm and methodology, programming language research, program design, program development, program verification, software engineering principles, graph algorithms, and philosophical foundations of computer programming and computer science. Many of his papers are the source of new research areas. Several concepts and problems that are now standard in computer science were first identified by Dijkstra or bear names coined by him.[16][17]

As you might have noticed, he also worked on programming paradigms and was a fan of structured programming. Wearing this hat he wrote a very influential article named "Go to statement considered harmful" back in 1968 (Link to scan of that original | Link to HTML version). Actually this is one of the most influential opinion pieces in computer science ever.

So whoever is lecturing should at least have the courtesy of pointing to that classic article, which later spawned a myriad of "$STUFF considered harmful" opinions. It's also a short read, only two sheets of paper.
 