Operating systems in academic settings

Never got call-by-value vs call-by-reference, even though that was covered in my curriculum.
Call by value resolves the argument and passes its value to the subroutine/function (so the function gets a COPY of the argument).
Call by reference, passes a reference/pointer to it to the subroutine/function.

The latter is particularly useful for passing large objects/structs because a pointer is small -- it fits in a processor register whereas the thing that it references may be kilobytes or larger.

With a reference to the parameter, the called function can alter the original! When the value of the original is passed, it's just a copy -- so the original is "safe".
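
A minimal C sketch of the two conventions (the function names by_value/by_reference are just for illustration):
Code:
#include <stdio.h>

/* Called by value: works on a private copy; the caller's variable is safe. */
void by_value(int n) { n = 99; }

/* Called by reference (a pointer, in C): can alter the caller's original. */
void by_reference(int *n) { *n = 99; }

int main(void) {
    int x = 42;

    by_value(x);
    printf("after by_value:     %d\n", x);  /* still 42 */

    by_reference(&x);
    printf("after by_reference: %d\n", x);  /* now 99 */
    return 0;
}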

Additionally, in a multithreaded environment (or, in any case where some other actor can access the original while the function is also accessing it), there is a risk in call by reference that one or the other could make changes that corrupt the other's view of the parameter -- because there is nothing to ensure atomic access for either!

E.g., I pass a pointer to a string -- "31 Oct 2024" -- to a function. Before the function gets to act on it, another thread starts to change it to reflect the NEW date -- "01 Nov 2024". But it only manages to change the first character before the function gets its chance to peek at it.

When the function accesses it, the string is "01 Oct 2024" -- which is almost certainly NOT what the function's caller expected it to be when the function was invoked. It's also neither of the two values that seem possible (31 Oct or 01 Nov).

Similarly, if the function changes the string before that other thread gets a chance to look at it, the thread will see some/all of the changes instead of the original.

[While a silly example, the point is that you can't predict the actions/interactions of either actor because they are sharing an object (the string) without any mechanism -- e.g., a mutex -- to arbitrate their accesses to it. And the problem is probably not going to manifest itself reliably, so you'll be troubleshooting an intermittent failure.]
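
For what it's worth, here's a minimal pthreads sketch of the arbitration mentioned above -- a mutex serializing access to the shared date string, so a reader sees all-old or all-new, never a mix. All names (current_date, date_lock, print_date, roll_date) are invented for the example.
Code:
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char current_date[] = "31 Oct 2024";
static pthread_mutex_t date_lock = PTHREAD_MUTEX_INITIALIZER;

static void *print_date(void *arg) {
    (void)arg;
    pthread_mutex_lock(&date_lock);
    printf("reader sees: %s\n", current_date);   /* all-old or all-new, never a mix */
    pthread_mutex_unlock(&date_lock);
    return NULL;
}

static void *roll_date(void *arg) {
    (void)arg;
    pthread_mutex_lock(&date_lock);
    strcpy(current_date, "01 Nov 2024");         /* update is atomic w.r.t. readers */
    pthread_mutex_unlock(&date_lock);
    return NULL;
}

int main(void) {
    pthread_t reader, writer;
    pthread_create(&reader, NULL, print_date, NULL);
    pthread_create(&writer, NULL, roll_date, NULL);
    pthread_join(reader, NULL);
    pthread_join(writer, NULL);
    return 0;
}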

Imagine if the passed (shared!) object is "substantial" -- like a frame of video on which you are expecting the function to perform "scene analysis". While the function is "looking at the picture", something is reusing that "buffer" to store the next image. So, it is likely that the function will see part of the previous image and part of the new image -- and be unaware that the scene is in an inconsistent state (neither previous nor new).

How will this manifest? Will other actions by the program be confused because of the misanalyzed scene results? Will the user be able to identify this as the cause of the problem he is experiencing? How many kilobucks will you spend trying to figure out that this is the cause of the "bad behavior" before you can actually fix it?
 
Most languages don't support passing arrays or struct instances by value.

In C++, passing by value also invokes the copy constructor, so it can be very expensive.
 
Most languages don't support passing arrays or struct instances by value.

In C++, passing by value also invokes the copy constructor, so it can be very expensive.
There's nothing preventing it. C allows structs to be passed to -- and from -- functions. It's up to the developer to be aware of how big the stack frame will become in such an instance. If you're only going to use a portion of the struct, it's best to just excise the necessary members and pass them individually.

That, of course, may not be possible (esp if the function is responsible for deciding what it needs from the struct!)
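
A rough C sketch of the trade-off: passing a bulky (hypothetical) struct by value copies the whole thing into the callee's stack frame, while excising just the needed members copies only a few bytes.
Code:
#include <stdio.h>

/* A deliberately bulky struct -- hypothetical, just to make the point. */
struct sensor_record {
    double samples[1024];
    double offset;
    double gain;
};

/* Whole struct passed by value: ~8 KB copied into the callee's stack frame. */
double calibrated_first_by_value(struct sensor_record r) {
    return (r.samples[0] - r.offset) * r.gain;
}

/* Only the members the function needs, passed individually. */
double calibrated(double sample, double offset, double gain) {
    return (sample - offset) * gain;
}

int main(void) {
    struct sensor_record rec = { .samples = {3.0}, .offset = 1.0, .gain = 2.0 };

    printf("%f\n", calibrated_first_by_value(rec));                   /* copies the whole struct */
    printf("%f\n", calibrated(rec.samples[0], rec.offset, rec.gain)); /* just three doubles */
    return 0;
}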

There are also "hacks" that you can use to effectively give call-by-value protections to passed references (e.g., CoW).
But, they typically require specific runtime support from the OS.
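
One illustration of such OS support, assuming a POSIX system: a MAP_PRIVATE mapping is a copy-on-write view, so writes through it never reach the original. The scratch-file name below is made up for the demo.
Code:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    const char *path = "cow_demo.dat";        /* hypothetical scratch file */
    const char *text = "31 Oct 2024";
    size_t len = strlen(text) + 1;

    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0 || write(fd, text, len) != (ssize_t)len) return 1;

    char *shared = mmap(NULL, len, PROT_READ,              MAP_SHARED,  fd, 0);
    char *priv   = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (shared == MAP_FAILED || priv == MAP_FAILED) return 1;

    memcpy(priv, "01 Nov 2024", len);         /* "callee" scribbles on its CoW view */

    printf("shared view : %s\n", shared);     /* still "31 Oct 2024" */
    printf("private view: %s\n", priv);       /* now  "01 Nov 2024" */

    munmap(shared, len);
    munmap(priv, len);
    close(fd);
    unlink(path);
    return 0;
}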
 
Don Y : Thanks for the explanation, it actually makes sense to me now. It does point out the weaknesses of that design. My take is that this was dictated by the available hardware. On a 36-bit PDP-10, it made sense to know the difference. On a modern Threadripper or an Epyc setup, with a 64-bit OS on it, one needs a different programmatic approach to take advantage of the hardware.
 
A(t least some a)rray languages do copy on write. So the array argument *appears* to be call by value, though in reality it is passed by reference! For example:
Code:
$ k
...
  a:1 2 3 4        / assign an array of 4 values to a
  f:{x[2]:11; x}   / x is the default name for the first arg. Assign 11 to x[2] and return x
  b:f a            / Pass a to function f and assign the result to b
  a
1 2 3 4
  b
1 2 11 4

This is defined by language semantics, no support from the OS needed.
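
For the curious, here is a rough C sketch of how a language runtime might implement that trick itself -- a reference-counted array that is only cloned when a shared instance is written to. All names are invented; this is not necessarily how k does it internally.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical refcounted array giving call-by-value *semantics*
 * while actually passing a reference. */
typedef struct {
    int refs;
    size_t len;
    int data[];
} Array;

static Array *array_new(const int *src, size_t len) {
    Array *a = malloc(sizeof *a + len * sizeof *a->data);
    a->refs = 1;
    a->len = len;
    memcpy(a->data, src, len * sizeof *a->data);
    return a;
}

static Array *array_share(Array *a) { a->refs++; return a; }      /* pass by reference */
static void array_release(Array *a) { if (--a->refs == 0) free(a); }

/* Copy-on-write: clone only if someone else still references the array. */
static Array *array_set(Array *a, size_t i, int v) {
    if (a->refs > 1) {
        Array *copy = array_new(a->data, a->len);
        a->refs--;
        a = copy;
    }
    a->data[i] = v;
    return a;
}

/* Analogue of the k function f:{x[2]:11; x} */
static Array *f(Array *x) { return array_set(x, 2, 11); }

int main(void) {
    int init[] = {1, 2, 3, 4};
    Array *a = array_new(init, 4);
    Array *b = f(array_share(a));   /* looks like call by value, reference under the hood */

    for (size_t i = 0; i < a->len; i++) printf("%d ", a->data[i]);  /* 1 2 3 4 */
    printf("\n");
    for (size_t i = 0; i < b->len; i++) printf("%d ", b->data[i]);  /* 1 2 11 4 */
    printf("\n");

    array_release(a);
    array_release(b);
    return 0;
}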
 
At my school last week we were using 10 routers and 10 switches, all from Cisco. I think they run FreeBSD internally. You will find Linux, FreeBSD, and other open-source operating systems everywhere -- different systems for each job. An operating system is just a tool to me, but I have my own preference: :) FreeBSD
 
A(t least some a)rray languages do copy on write. So the array argument *appears* to be call by value, though in reality it is passed by reference! For example:
Code:
$ k
...
  a:1 2 3 4        / assign an array of 4 values to a
  f:{x[2]:11; x}   / x is the default name for the first arg. Assign 11 to x[2] and return x
  b:f a            / Pass a to function f and assign the result to b
  a
1 2 3 4
  b
1 2 11 4

This is defined by language semantics, no support from the OS needed.
Yeah, this is where I got lost in semantics so badly that I decided it's not worth the effort anymore to learn the difference. Just learn a bit of the language to calculate the result in a reasonable amount of time, and be done with it. And if the language is discovered to have a bug that affects trade and foreign policy -- not my problem, I'll just use something else that the boss told me to use. :P
 
Maybe you all have your priorities wrong. Perhaps you should've studied life sciences first and then branch out. What good is it to only know computers when HALF the country is obese, for example, and your health is under attack?
 
Maybe you all have your priorities wrong. Perhaps you should've studied life sciences first and then branch out. What good is it to only know computers when HALF the country is obese, for example, and your health is under attack?
Because somebody needs to be technically competent and to know how to manage information and how to use relevant tools, and to know good tools from bad ones? This is important even in a health care crisis, y'know.
 
This is defined by language semantics, no support from the OS needed.
And how does the language protect against another thread (written in the same or a different language) trying to access that same object while it is fiddling with pointers?
The reason you want to use call by value is to address the case where <something> else can also be active in the context; something whose access patterns you can't control or could be naive about.
 
And how does the language protect against another thread (written in the same or a different language) trying to access that same object while it is fiddling with pointers?
The reason you want to use call by value is to address the case where <something> else can also be active in the context; something whose access patterns you can't control or could be naive about.

Array languages that provide parallelism do so at a higher level and can use more cores or more processors or even more nodes (connected via some comm. protocol). Inherently many array operations can be done in parallel but how the available parallelism is used is left up to the user. See for example:
View: https://www.youtube.com/watch?v=JvLWvyG7JEs&t=630s
 
Array languages that provide parallelism do so at a higher level and can use more cores or more processors or even more nodes (connected via some comm. protocol). Inherently many array operations can be done in parallel but how the available parallelism is used is left up to the user.
Short of "academic exercises", I can't think of a project (or product!) that I've written in ONE language in half a century. So, any guarantees a language makes are usually worthless, in the grand scheme of things (regardless of the issue being "solved"). This is a lesson language designers fail to understand.

When the OS provides these protections, then the language -- and the developer -- is freed from that responsibility.

E.g., creating separate, protected process containers means a language can be sloppy in how it allows pointers to be resolved -- because any damage will be confined to the offender's process space and not jeopardize anything else in the system. Catching traps (arithmetic, etc.) similarly confines the "offense" to the "offender". Etc.

This, regardless of the language that the developer used ATOP THE HARDWARE.
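
A small POSIX sketch of that containment: the child process scribbles through a wild pointer, takes the trap, and the parent -- in its own protected address space -- merely observes the corpse and carries on.
Code:
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        volatile int *wild = (int *)0xdeadbeef;  /* bogus address */
        *wild = 42;                              /* traps -- but only in the child */
        _exit(0);                                /* never reached */
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        printf("child died with signal %d; parent unharmed\n", WTERMSIG(status));
    return 0;
}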
 
You would think that BSDs would be popular in computer science labs because it is much easier to find anything in a BSD than in a typical Linux distribution.

Don't they have homework of the style "change this to do that" when dealing with OSes?
 
You would think that BSDs would be popular in computer science labs because it is much easier to find anything in a BSD than in a typical Linux distribution.

Don't they have homework of the style "change this to do that" when dealing with OSes?
Linux was popular because it came 'pre-configured' for the university environment... with a BSD, you have to do a lot of that config work yourself.

And yes, they did have that kind of homework in my day, although it was limited to Nachos, a toy OS written in Java. Homework consisted of writing some missing OS components yourself. For me, it was difficult to figure out exactly where in the source .java file I was supposed to stick the code snippet that I wrote. And it turned out I kept picking the wrong source file to edit! 😭
 
For me, it was difficult to figure out exactly where in the source .java file I was supposed to stick the code snippet that I wrote. And it turned out I kept picking the wrong source file to edit
This is a universal truth when it comes to software "products". There are no "road maps"; "where is the first line of executed code? Then, where are the rest?" "Which commands do I need to learn to use the machine?"

In "mechanisms", you have "exploded parts diagrams" so you can see the relationships of the "components" to each other. That doesn't exist in most software products. You have to be in-the-know... before you can KNOW!
 
This is a universal truth when it comes to software "products". There are no "road maps"; "where is the first line of executed code? Then, where are the rest?" "Which commands do I need to learn to use the machine?"

In "mechanisms", you have "exploded parts diagrams" so you can see the relationships of the "components" to each other. That doesn't exist in most software products. You have to be in-the-know... before you can KNOW!
:rolleyes: Well, the class did have some prerequisites. After spending some time troubleshooting my own FreeBSD-based installations, at this point, I probably could go back and figure out what I missed in that class, and complete the homework...
 
Y'all are mostly talking about university-level academic institutions. For high-school (and I guess now even down to the elementary-school level), the argument has always been that Windows and Microsoft Office are what people are going to see when they get out in the world, so that is what they should learn in school, so they can hit the ground running.

While there is some truth to this, at least for non-CS people, both Windows and Office are evolving in a way that is not so good for schools. Every time you try to do anything, they want you to use AI to do it, and now they apparently have an opt-out thing in Word that lets them use anything you type to train their AIs. I am not sure this is what you want in a school setting. (I have used Microsoft Office for many years, and I just finally got fed up with them and switched to LibreOffice.)

I'm thinking it may be time for schools to use Linux (although I love FreeBSD dearly, it is sometimes too "some assembly required" for schools without a lot of support) and LibreOffice. Although LibreOffice does not work exactly like Word, it is pretty close, and it can produce Microsoft-Office-compatible documents, which is good for any place that insists on a docx or xlsx file.
 
I'm thinking it may be time for schools to use Linux (although I love FreeBSD dearly, it is sometimes too "some assembly required" for schools without a lot of support) and LibreOffice. Although LibreOffice does not work exactly like Word, it is pretty close, and it can produce Microsoft-Office-compatible documents, which is good for any place that insists on a docx or xlsx file.
I don't think any of the FOSS OSes are "ready-for-prime-time" enough for that environment. School districts are woefully inadequate at supporting hardware and software products. You need something that is stripped down to the essentials: ONE supported filesystem, ONE desktop/window manager, ONE browser, ONE productivity suite, NO "extra tools/services", NO config files, etc. Everything has to "just work". This is contrary to most FOSS, which seems to strive to be "most flexible" (even if you hide the configuration files, they still exist... the device can "change" because it has support for that change as part of its very nature).

I've been developing a STEAM curriculum to supplement the "mainstream" curriculum in elementary/jr high/high schools. The idea is to teach programming (concepts, not a specific LANGUAGE) with "real world" problems. Not silly abstractions ("Let's write a program to compute the Fibonacci sequence!").

To that end, I've been repurposing laptops to be "teaching appliances". Stripping everything out of the laptop that isn't essential to teaching the courseware and allowing students to develop solutions thereunder. You want to surf the web? Go find someone else's computer. You want to draw pictures? See previous comment. Etc.

Initially, we plan to teach students how to program a "robot" to navigate a maze. Force them to think about how THEY would navigate a maze IF THEY WERE IN THE MAZE (with only their immediate surroundings "probe-able"). Then, expose them to a pseudo-language (that runs on the laptop) to command a virtual robot to explore a maze depicted on the screen. They can see their code executing (single step) and watch what the on-screen robot is doing to discover the flaws in their approach.

One can then offer different mazes to show them how various algorithms can fail. Eventually, getting them to refine their INDIVIDUAL solutions to the point where they can solve a "random" maze generated by their laptop to exercise their solution (and let them experience conditions that they might not have previously imagined -- what if the starting point is IN the maze instead of at the periphery?? Ooops!)

You can imagine how you can "evaluate" the efficiency of their algorithms -- count the number of "instruction fetches" to solve a particular maze; count the number of robot motions, etc. This introduces the notion that algorithms have costs and some can be better than others -- even if they ALL solve the maze!
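
To make that concrete, here is a rough C sketch of one such strategy -- a right-hand wall follower -- run against a contrived maze where it fails (the goal sits in a pocket walled off by an interior island it never touches), while also tallying the "costs" described above. The maze layout and names are made up for the illustration.
Code:
#include <stdio.h>

/* '#' is wall, ' ' is open, 'E' is the goal. Hypothetical layout. */
static const char *maze[] = {
    "#######",
    "#     #",
    "# ### #",
    "# #E# #",
    "# #   #",
    "#     #",
    "#######",
};

static const int dr[] = {-1, 0, 1, 0};   /* N, E, S, W row deltas    */
static const int dc[] = { 0, 1, 0, -1};  /* N, E, S, W column deltas */

static int open_cell(int r, int c) { return maze[r][c] != '#'; }

int main(void) {
    int r = 1, c = 1, dir = 1;           /* start in the top-left corner, facing east */
    int moves = 0, fetches = 0;          /* the "costs" a teacher could compare */

    while (maze[r][c] != 'E' && fetches < 1000) {
        int right = (dir + 1) % 4;
        if (open_cell(r + dr[right], c + dc[right])) {
            dir = right;                               /* turn right and advance */
            r += dr[dir]; c += dc[dir]; moves++;
        } else if (open_cell(r + dr[dir], c + dc[dir])) {
            r += dr[dir]; c += dc[dir]; moves++;       /* go straight */
        } else {
            dir = (dir + 3) % 4;                       /* right and straight blocked: turn left */
        }
        fetches++;
    }

    if (maze[r][c] == 'E')
        printf("solved in %d moves (%d rule evaluations)\n", moves, fetches);
    else
        printf("stuck after %d rule evaluations -- this algorithm fails on this maze\n", fetches);
    return 0;
}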

[You can see a more advanced class teaching students to maintain and update a model of the maze (based on their robot's observations) and infer characteristics of the maze based on that knowledge.]

As (technically) "supporting" such a curriculum can quickly get costly (time), you want to minimize the number of things that can go wrong with their kit. AND, make it easy to restore/replace their "defective" device with something to keep them "up" (i.e., let all of their "files" reside on a thumb drive; no way to "save" anything on the laptop/appliance, itself).
 
While there is some truth to this, at least for non-CS people, both Windows and Office are evolving in a way that is not so good for schools. Every time you try to do anything, they want you to use AI to do it, and now they apparently have an opt-out thing in Word that lets them use anything you type to train their AIs.
Holy crap! 😲
I am not sure this is what you want in a school setting.
yeah, I agree with that take.

For high-school (and I guess now even down to the elementary-school level), the argument has always been that Windows and Microsoft Office are what people are going to see when they get out in the world, so that is what they should learn in school, so they can hit the ground running.
I'd say that universities are using the same argument when setting up their computer labs. There's public labs for all students (where that argument applies), and then there's specialty labs (GPU computing research, Linux labs, and more) that are department-specific.
 
Short of "academic exercises", I can't think of a project (or product!) that I've written in ONE language in half a century. So, any guarantees a language makes are usually worthless, in the grand scheme of things (regardless of the issue being "solved"). This is a lesson language designers fail to understand.
Your experience can't be generalized.

E.g., creating separate, protected process containers means a language can be sloppy in how it allows pointers to be resolved -- because any damage will be confined to the offender's process space and not jeopardize anything else in the system.
Array languages do not have (or expose) pointers.
 
Your experience can't be generalized.
Every (nontrivial) "device" that you use was likely written in at least two languages. Some portion in whatever assembly language the hardware defined and the rest in some high-level language.

Show me how to write a driver for a UART/NIC/display/etc. in your "array language" -- so that the rest of the application can ALSO be written in it.

And, if you think there are "few" such instances, look at the devices in the building you happen to be occupying at the moment. There are processors (and their "applications") in:
  • your keyboard
  • your mouse
  • your computer(s) (BIOS, drivers, newer NICs, each disk drive, each USB peripheral, etc.)
  • your microwave oven, stove, refrigerator, washing machine, dryer
  • your furnace, thermostat
  • your TV(s), HiFi(s), cordless phone(s)
  • your cell phone(s)
  • your car (upwards of 50!)
SOME of those (the most trivial) MAY have been written in a single language -- most likely assembly language. But, any of substance have been built from multiple languages.

Almost all will have at least two threads of execution: a foreground and a background -- as interrupts are typically used to address hardware interfaces with tighter timeliness constraints.

Imagine passing a pointer to a buffer of characters to a UART. THEN, altering some of the characters. Can you vouch for what actually gets put on the wire (given that the characters may "wait" in that buffer for some period of time before the UART gets around to pushing them out the serializer)? If, OTOH, you passed this "by value", then as soon as the function/subroutine returns, you can reuse or alter the contents of that buffer with impunity.
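
A rough C sketch of that "by value" discipline for a UART driver: uart_send() copies the caller's bytes into a driver-owned buffer before returning, and the (simulated) TX interrupt drains the copy later. All names here (uart_send, uart_tx_isr, UART_TX_REG) are hypothetical.
Code:
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TX_BUF_SIZE 256

static volatile uint8_t UART_TX_REG;           /* stand-in for a memory-mapped TX register */
static volatile uint8_t tx_buf[TX_BUF_SIZE];   /* driver-owned copy of the data */
static volatile unsigned tx_head, tx_tail;

/* Foreground: queue a COPY of the caller's bytes; returns how many were accepted.
 * Once this returns, the caller may scribble on its own buffer with impunity. */
unsigned uart_send(const uint8_t *data, unsigned len) {
    unsigned queued = 0;
    while (queued < len) {
        unsigned next = (tx_tail + 1) % TX_BUF_SIZE;
        if (next == tx_head)                   /* driver buffer full */
            break;
        tx_buf[tx_tail] = data[queued++];
        tx_tail = next;
    }
    return queued;
}

/* Background (would be the TX-empty ISR): move one queued byte to the hardware. */
void uart_tx_isr(void) {
    if (tx_head != tx_tail) {
        UART_TX_REG = tx_buf[tx_head];
        tx_head = (tx_head + 1) % TX_BUF_SIZE;
        putchar(UART_TX_REG);                  /* simulate the wire for this sketch */
    }
}

int main(void) {
    char msg[] = "31 Oct 2024\n";
    uart_send((const uint8_t *)msg, (unsigned)strlen(msg));
    memcpy(msg, "01 Nov 2024\n", strlen(msg)); /* caller reuses its buffer immediately... */
    while (tx_head != tx_tail)
        uart_tx_isr();                         /* ...yet the old date goes out intact */
    return 0;
}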

What protections does your array language have against an ISR running on that device (to provide I/O) dicking with the array you just tried to alter?

Ans: None. Because the language wasn't designed with an awareness of anything OTHER THAN that language itself.

WRT "array languages exposing pointers", you've clearly missed the point. There are other actors BESIDES your array language running in that hardware! What protection do your arrays have from me (running in some other language that isn't constrained by the rules of YOUR language) stomping on the memory locations in which their values are stored?
 
Array languages are languages such as APL, J, k, etc., where operations apply to entire arrays. A language such as C, which allows the use of arrays but requires explicit looping, is considered a scalar language.
 