rdevd: a devd re-implemented in Rust

I don't think this is correct. Sure, you could argue from the design perspective: we have models that are just "lists of emojis", most of them read-only, and we have "floating" views to display them, all read-only. Qt's model/view classes would be a match. But things like negotiating sizes and positions must be done anyway; it's then the job of "item delegates" instead of something derived from QWidget. Only doing that for the part that's currently visible isn't a good solution either. When the UI still allows having hundreds of emojis on screen at the same time, this will result in a lot of work when switching to a different tab. Although I never tried the model/view approach here (it's just a LOT more work; there's no view available for a floating layout of items, and adding a custom widget layout was much more straightforward), I did some experiments trying the same idea (only initialize/render what's visible). As the event loop is blocked, unless you're doing dirty hacks with timers that just add a lot more overhead, even a delay of 500ms when switching tabs is very noticeable to the user and not acceptable to me. Possibly an implementation of drawing in an "item delegate" adds less overhead than QWidget does, but the problem itself remains...
The way he proposed is the correct way, especially in your case. ;) You forgot that emojis are not images but UTF runes. That means you can draw them like normal text. Your whole algorithm can be done in Qt, in a small amount of code:
  1. Load all emojis into a buffer, can be done at start of the application.
  2. Draw a text widget.
  3. Put your emojis there.
  4. Profit.
This solution is a few times faster than the one you proposed. Additionally, it solves other issues: you can easily resize emojis just by raising/lowering the font size, the redrawing problems you mentioned, selection of emojis, and accessibility options.

Actually, my implementation here is generic; even the "type registry" works dynamically. It requires a certain amount of "boilerplate" and following conventions in each class implementation, of course. But IMHO much more important, again: most of the time you should use very little inheritance anyway and just don't need polymorphism, and in those cases, implementations in C are straightforward, not requiring any "framework" at all.
That's not how frameworks usually work. The main reason for frameworks to exist is to reduce the amount of work needed to do some task. So it shouldn't need boilerplate code. ;) True, that can be achieved without any polymorphism; for example, the GTK library is written in C. But some high-level concepts, like namespaces and metaprogramming, make the work a lot easier. Again, less work == better.
 
The way he proposed is the correct way, especially in your case. ;) You forgot that emojis are not images but UTF runes. That means you can draw them like normal text. Your whole algorithm can be done in Qt, in a small amount of code:
  1. Load all emojis into a buffer, can be done at start of the application.
  2. Draw a text widget.
  3. Put your emojis there.
  4. Profit.
This solution is a few times faster than the one you proposed. Additionally, it solves other issues: you can easily resize emojis just by raising/lowering the font size, the redrawing problems you mentioned, selection of emojis, and accessibility options.
Uhm. Seems there's some serious misinterpretation of at least parts of what was written so far.

First things first, of course the emojis are "just text" at some layer of the application. That's actually a must, because transferring an emoji to some other application (by faking keyboard input events, by X11 selections, ...) operates on text. There's no need to load that (except for the history); the emojis are just static data compiled into the application (generated from Unicode files at compile time).
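To make the "just text" point concrete, here's a stdlib-only Rust sketch. The table and names are purely illustrative (the actual application generates a much larger table from the Unicode data files at compile time); the point it demonstrates is that an emoji is a static string of codepoints, and that many "single" emojis are multi-codepoint sequences.

```rust
// Illustrative miniature of a compile-time-generated emoji table.
// The real application's data is generated from Unicode files; these
// three entries are just examples.
static EMOJIS: &[(&str, &str)] = &[
    ("\u{1F600}", "grinning face"),                       // one codepoint
    ("\u{1F44D}\u{1F3FB}", "thumbs up: light skin tone"), // base + modifier
    ("\u{1F468}\u{200D}\u{1F469}\u{200D}\u{1F467}", "family: man, woman, girl"), // ZWJ sequence
];

fn main() {
    // "One emoji" is often several codepoints glued together:
    for (s, name) in EMOJIS {
        println!("{name}: {} codepoint(s), {} UTF-8 bytes", s.chars().count(), s.len());
    }
}
```

Copying such a string to another application (selections, faked key events) is plain text transfer; everything visual happens later, at rasterization time.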

But then, what's put on the screen is not text but bitmap/pixmap graphics. The de-facto standard library to "rasterize" glyphs from fonts is freetype; normally you tell it which size you want and then query a rasterized (alpha) bitmap from it for some specific glyph. Then it's your job to "render" that pixmap in whatever way is suitable for your application. With (color) emojis, there's additional trouble ahead: fonts either contain them in PNG format (so, already a bitmap, and freetype can't scale bitmaps; that's your job then, and it's non-trivial for somewhat okay quality) or in SVG format (scalable, but not implemented by freetype; you need to attach some separate SVG rasterizer by implementing an interface offered by freetype). When you use Qt, all of this is handled for you of course, but it still happens, and it's potentially expensive.

Then, what you write is more or less exactly what both my applications do, except every single emoji needs its own "UI element" (whether that's actually a widget or some "view item" that might come with less overhead in Qt's model/view framework is an implementation detail here) for obvious reasons: There's some interactive functionality attached to the emoji.

That's not how frameworks usually work. The main reason for frameworks to exist is to reduce the amount of work needed to do some task. So it shouldn't need boilerplate code. ;) True, that can be achieved without any polymorphism; for example, the GTK library is written in C. But some high-level concepts, like namespaces and metaprogramming, make the work a lot easier. Again, less work == better.
The point is, with a somewhat "clever" design (including conventions that must be met), you can reduce the amount of boilerplate a lot, so it's pretty much acceptable in practice. You can't eliminate it, because polymorphism requires things C can't auto-generate (and hide), like e.g. a "vtable" or similar (holding pointers to the virtual methods). If, on the other hand, you can avoid polymorphism altogether, you don't need any boilerplate. "Simple objects", although not a language construct in C, are straightforward with what the language offers.
 
First things first, of course the emojis are "just text" at some layer of the application.
Not just at some layer; emojis always come from fonts. They are part of the Unicode standard, not separate images. Any software for previewing glyphs can confirm it: emojis ARE Unicode runes, the same as letters, numbers, symbols, etc. That's why, with some system settings, emojis fail to render properly or look different on different OSes; the fonts simply don't have glyphs for them.

But then, what's put on the screen is not text but bitmap/pixmap graphics. The de-facto standard library to "rasterize" glyphs from fonts is freetype; normally you tell it which size you want and then query a rasterized (alpha) bitmap from it for some specific glyph. Then it's your job to "render" that pixmap in whatever way is suitable for your application. With (color) emojis, there's additional trouble ahead: fonts either contain them in PNG format (so, already a bitmap, and freetype can't scale bitmaps; that's your job then, and it's non-trivial for somewhat okay quality) or in SVG format (scalable, but not implemented by freetype; you need to attach some separate SVG rasterizer by implementing an interface offered by freetype). When you use Qt, all of this is handled for you of course, but it still happens, and it's potentially expensive.
That sounds like it was taken from a very old handbook. ;) All TrueType fonts or web fonts, the most commonly used today, are vector graphics. As you can see, even on this forum, the system works pretty well with them. FreeType has full support for TTF. And it is not as expensive as you think. :) They work pretty well even on microcontrollers.

Then, what you write is more or less exactly what both my applications do, except every single emoji needs its own "UI element" (whether that's actually a widget or some "view item" that might come with less overhead in Qt's model/view framework is an implementation detail here) for obvious reasons: There's some interactive functionality attached to the emoji.
And here you are wrong. ;) Your applications do something absolutely different. Your solution is to draw everything directly on the screen, while the solution I presented is based on off-screen drawing (in memory) and then drawing only the needed part on the device. The second is around two orders of magnitude faster than yours.
Uhm. Seems there's some serious misinterpretation of at least parts of what was written so far.
Nope, it was a polite attempt to explain that the issue you raised wasn't inside a framework (in this case Qt) or high-level programming, but a classic PEBKAC. ;) I'm saying that from my own experience. I made the same mistake some time ago, and now I know what was wrong. Answer: not the framework's fault. :p
 
Not just at some layer; emojis always come from fonts. They are part of the Unicode standard, not separate images. Any software for previewing glyphs can confirm it: emojis ARE Unicode runes, the same as letters, numbers, symbols, etc. That's why, with some system settings, emojis fail to render properly or look different on different OSes; the fonts simply don't have glyphs for them.
And the pixels appearing on screen is pure "magic", right? 🙄

That sounds like it was taken from a very old handbook. ;) All TrueType fonts or web fonts, the most commonly used today, are vector graphics. As you can see, even on this forum, the system works pretty well with them. FreeType has full support for TTF. And it is not as expensive as you think. :) They work pretty well even on microcontrollers.
The most widely used emoji font, "Noto Color Emoji", has PNG glyphs (109 pixels). The "Twitter Color Emoji" font has SVG glyphs. TTF is just a container format; it typically contains scalable "outline" glyphs, but those don't support colors.

Regarding the "very old handbook":
And here you are wrong. ;) Your applications do something absolutely different. Your solution is to draw everything directly on the screen, while the solution I presented is based on off-screen drawing (in memory) and then drawing only the needed part on the device. The second is around two orders of magnitude faster than yours.
Qt doesn't even allow you to draw directly onto the screen. My solution without a toolkit also (by default) always draws to off-screen pixmaps for performance reasons, although "directly to screen" is (kind of, depending on how the X server manages window objects) possible with XRender. And of course it keeps "damage lists" and only draws out what's necessary. As does Qt. That's all completely unrelated here.

Nope, it was a polite attempt to explain that the issue you raised wasn't inside a framework (in this case Qt) or high-level programming, but a classic PEBKAC. ;) I'm saying that from my own experience. I made the same mistake some time ago, and now I know what was wrong. Answer: not the framework's fault. :p
Maybe have a look at the code first? It's pretty obvious the Qt code leaves all the drawing stuff to Qt, only handling labels with "text" characters, while the code without Qt does the same of course, but contains all the other code necessary to draw the actual pixels.
 
BTW, this recent development in this thread is a nice example of how abstractions mislead people into thinking something is "trivial" that is anything but trivial in reality. Getting some text onto a graphical display is one of these things. When using some modern GUI toolkit like e.g. Qt, you give your text to some widget and you're done, but there's a LOT of stuff that needs to happen additionally:
  • The text must be analyzed for types of script used, which e.g. includes writing direction (left-to-right, right-to-left, even top-to-bottom) and broken into parts as necessary if multiple of these types are contained. Libraries like ICU and fribidi can help here.
  • Fonts must be found containing all the needed glyphs for these "text runs". Truetype/Opentype fonts can contain at most 65535 glyphs, so it's even technically impossible to have a single font covering Unicode completely. fontconfig can help with finding the font you're looking for.
  • The text runs must be "shaped". This is mostly about exact positioning of the glyphs on the screen. The glyphs stored in fonts contain an "advance" value which tells how far you have to shift your position for drawing the next glyph, but that's only an approximation, often this is context dependent (see "kerning", some fonts have tables for that). Also, typesetting rules sometimes require certain combinations of glyphs to be replaced by "ligatures". And finally, there are "grapheme clusters" in Unicode, combining several codepoints into a single "character" (also used a lot with emojis), they must be identified and mapped to a single glyph. harfbuzz can do all this stuff for you.
  • Now you have a list of glyphs to draw and their positions on the screen. Every glyph must be rasterized in the exact pixel size you need. Your exact positions will contain fractions of pixels, so some "sub pixel" shift must also be applied for rasterizing. That's what freetype does for you. But as mentioned above, only for the classic "outline" glyphs. If the font already contains bitmap glyphs (e.g. PNG), they must be scaled by other means. For SVG, you need a separate SVG rasterizer, there's an example coming with freetype using librsvg for that.
  • Now you finally have a set of bitmaps with alpha channel, so the final step is composite rendering on whatever "background" you already have.
Don't get me wrong, it's very nice not having to worry about all that complexity when using some modern UI toolkit. It's still something to be aware of though.
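As a tiny taste of the "grapheme cluster" step above: a correct implementation needs the full Unicode segmentation rules (or harfbuzz/ICU), but even this deliberately naive Rust sketch, which only merges codepoints joined by U+200D (ZWJ), shows why "one character" is not "one codepoint". The function name and scope are my invention for illustration.

```rust
// Deliberately naive: merges codepoints joined by U+200D (ZWJ) into one
// "cluster". Real shaping (harfbuzz) additionally handles variation
// selectors, regional-indicator pairs (flags), combining marks,
// ligatures, kerning, and more.
fn zwj_clusters(s: &str) -> Vec<String> {
    let mut clusters: Vec<String> = Vec::new();
    let mut join_next = false;
    for c in s.chars() {
        let is_zwj = c == '\u{200D}';
        if (join_next || is_zwj) && !clusters.is_empty() {
            // Continue the previous cluster.
            clusters.last_mut().unwrap().push(c);
        } else {
            clusters.push(c.to_string());
        }
        join_next = is_zwj;
    }
    clusters
}

fn main() {
    // "family: man, woman, girl" = 5 codepoints, but 1 drawable cluster
    let family = "\u{1F468}\u{200D}\u{1F469}\u{200D}\u{1F467}";
    assert_eq!(family.chars().count(), 5);
    assert_eq!(zwj_clusters(family).len(), 1);
    assert_eq!(zwj_clusters("ab").len(), 2);
}
```

The shaper's job is exactly this kind of grouping, done correctly for all scripts, plus mapping each cluster to one glyph and positioning it.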
 
BTW, this recent development in this thread is a nice example of how abstractions mislead people into thinking something is "trivial" that is anything but trivial in reality.
There are two aspects of this.

A "system" should be obvious, intuitive, trivial -- in appearance. A potential user shouldn't have to incur a high cognitive load to figure out how to approach a problem with a particular "system". "Complex is anything that doesn't fit in a single brain"

Behind the curtain, said system can be gloriously complex -- that's an issue for the implementor of the system and should not need to be exposed to the user. "What do YOU care how that is done? Just accept that it IS and rely on it!"

E.g., my current design "migrates" processes from one node (CPU) to another, elsewhere in the network. The developer doesn't need to know how (or why/when) this is done. Though he has to know that his reliance on timeliness guarantees has practical limits.

All he knows is that instruction 2 will execute after instruction 1 -- even if they happen to execute on different hardware devices.
 
Regarding the "very old handbook":
Which is enabled by default and always present today. It isn't true to call it "optional". The same as the requirement for normal external drawing libraries. ;)
Qt doesn't even allow you to draw directly onto the screen.
Not true. Your approach is exactly drawing on the screen. It is a good approach when you have a small amount of data to present to the user. But when there is something more, like billions of records in a table, it simply hits the roof. That's your issue with showing a large number of emojis. You create a lot of complicated structures (in the C meaning), which contain a lot of the same data, etc. Each of them is made separately, and this means a lot of work, with all the fun of constant memory allocation and freeing. The second approach creates just one such structure (or several more, but still no duplicated data) and fills it with the data. It is done before the whole drawing process; that's why it is often called off-screen.
Maybe have a look at the code first?
I was looking at both; that's why I know what you're doing wrong there. And if we're suggesting looking into code, KDE has an emoji picker, written in Qt. ;)
 
Which is enabled by default and always present today. It isn't true to call it "optional". The same as the requirement for normal external drawing libraries. ;)
What is "enabled by default"? Code to scale bitmap glyphs? Sure, UI toolkits contain such code. An SVG rasterizer? UI toolkits pull in librsvg for that. Nothing is there "by default"; when using freetype (which is what UI toolkits use), you have to take care of these yourself.

Not true. Your approach is exactly drawing on the screen.
Please show the exact code you think is drawing "on the screen". Yes, I'm asking for that because there simply is none.

It is a good approach when you have a small amount of data to present to the user. But when there is something more, like billions of records in a table, it simply hits the roof. That's your issue with showing a large number of emojis. You create a lot of complicated structures (in the C meaning), which contain a lot of the same data, etc.
Sorry, that's nonsense. There's (in both versions) a static list of emojis, containing the Unicode codepoints and the names. Nothing else.

Each of them is made separately, and this means a lot of work, with all the fun of constant memory allocation and freeing. The second approach creates just one such structure (or several more, but still no duplicated data) and fills it with the data. It is done before the whole drawing process; that's why it is often called off-screen.
Again, the emoji data is static; it's generated at compile time. What's created is UI elements (widgets), which is unavoidable in any case. Also, worrying about dynamic memory allocation is kind of a cargo cult; with modern allocators and typical usage patterns, it's not a problem at all. Qt allocates most of its objects dynamically. And even more interesting, for the X11 backend it uses xcb, and xcb does a dynamic allocation for each and every X event and message (there are LOTS of these) and requires freeing them after handling. No, this isn't a performance bottleneck at all.

I was looking at both; that's why I know what you're doing wrong there.
Doesn't look like it, honestly.

And if we're suggesting looking into code, KDE has an emoji picker, written in Qt. ;)
KDE's emoji picker, like most similar projects, has certain requirements, depending on Qt and/or some "input method" protocols and services. The point of mine was to create something sane that works with pure X11 (so it works with as many other X11 applications as possible, no matter how they are implemented).
 
There are two aspects of this.

A "system" should be obvious, intuitive, trivial -- in appearance. [...]
Sure, that's the whole point of (reusable) abstractions. Therefore I also agree with everything else you wrote. But it still helps to have at least a rough understanding of what goes on "under the hood" to write better code. In the example discussed above, it shouldn't surprise you that doing this whole dance for "rendering text" around 3.5k times consumes some CPU cycles. You shouldn't have the burden of implementing that yourself, which would be "reinventing the wheel" over and over again, but rough knowledge about how things work gives you a better idea of what to try when the resulting performance isn't acceptable.

Again, in the example discussed here, a possible way with Qt could be to try to avoid some of the overhead of QWidget by using the model/view classes and therefore having more lightweight "items" as your UI elements. As this is already a lot of work and I have doubts it would fully solve the issue here, I decided to do even more work by "getting down and dirty" myself. Reinvent the wheel as a last resort, but only the parts of the wheel you need for your particular purpose; this of course performs better than a fully generic solution. Here I don't need to analyze the texts (I know they're emojis), and therefore don't need to pick fonts based on that text (I know which fonts contain emoji glyphs). I also don't need any client-side rendering (it's specifically designed for X11, so XRender will do). All things Qt of course does, plus lots of other stuff not even mentioned here, because it has to cover any scenario for generic "text rendering".

But even that "extreme" solution aside, I think it always helps to have rough knowledge about how the things you just want to comfortably reuse are implemented. A very simple example might be dotnet's System.String class: if you know it's completely immutable and any "mutator" method gives you a new instance which is, apart from whatever change it makes, a full copy of your original instance, you can write better code that avoids creating and copying millions of strings...
 
Reality just doesn't work that way
Hey man! Reveal to us the true path, the path to absolute reality.

Rust has taught us how propaganda operates. The difference between a memory-unsafe program and a memory-safe one is like that between a motorcyclist without a helmet or safety gear and one with them. Helmets reduce the risk of death by 37% and the risk of head injury by 69%. Hypothetically, anything is possible.

Enjoy Rust, and ride safe.
 
Invoking nanny state rules won't help here. We are a helmet free zone. Seatbelts optional.
A relative, who was an ER surgeon, had much praise for those bikers who avoided helmets. The first warm days in spring were good times for kidney patients.
 
My point is you can only create so much of a safety bubble. Humans will continue to act the way we do until the robots take over.

When we get shop machinery, it comes with so many guards it is unusable.
Like they created the machine just for regulation's sake, not usability.
Within a week we learn how to override the safeties. Doors are removed and the machine can be used.
So do you want an unusable machine that is safe, or a usable machine that is unsafe?
Consult the lawyers.
 
Rust doesn't add a safety helmet. It instead replaces the entire motorbike with a bouncy castle. Unfortunately for many people's use-cases, this is unsatisfactory as any kind of solution.
 
Rust doesn't add a safety helmet. It instead replaces the entire motorbike with a bouncy castle
That could even be fun, if it didn't also come with the bouncy castle's level of handling and acceleration.
 
Hello everyone,

Small update:


I was kindly asked to enhance the rdevd daemon, so I spent some spare time (between the renovation works in my apartment and other activities) working on the code. The following changes have been made so far in the 'master' branch.

* The multi-threaded tokio mode was removed (initially, I planned parallel processing). Now the tokio crate runs in single-threaded mode. This approach allows simultaneously broadcasting the packet to connected clients and performing parsing and command execution. I did not spend much time measuring the performance of the code, but it seems it is not slower than the original devd(8). The async approach was picked because it reduces the "complications" of working with kqueue(9) and poll. Even with a small overhead (calling the local scheduler quite often), it is still more convenient because the tokio crate does everything.

Since tokio now runs in single-threaded mode, the memory footprint has decreased:
Code:
   PID COMM             RESOURCE                          VALUE
 7551 rdevd            user time                    00:00:00.008964
 7551 rdevd            system time                  00:00:00.017928
 7551 rdevd            maximum RSS                             8648 KB
 7551 rdevd            integral shared memory                  6804 KB
 7551 rdevd            integral unshared data                    12 KB
 7551 rdevd            integral unshared stack                  384 KB
 7551 rdevd            page reclaims                            673
 7551 rdevd            page faults                                0
 7551 rdevd            swaps                                      0
 7551 rdevd            block reads                                0
 7551 rdevd            block writes                               3
 7551 rdevd            messages sent                            256
 7551 rdevd            messages received                         10
 7551 rdevd            signals received                          10
 7551 rdevd            voluntary context switches               850
 7551 rdevd            involuntary context switches               1


Code:
 devd:
  PID COMM             RESOURCE                          VALUE       
 1778 devd             user time                    00:00:00.007828 
 1778 devd             system time                  00:00:00.014259 
 1778 devd             maximum RSS                             3680 KB
 1778 devd             integral shared memory                    72 KB
 1778 devd             integral unshared data                    16 KB
 1778 devd             integral unshared stack                  256 KB
 1778 devd             page reclaims                             91 
 1778 devd             page faults                                0 
 1778 devd             swaps                                      0 
 1778 devd             block reads                                0 
 1778 devd             block writes                               0 
 1778 devd             messages sent                           1960 
 1778 devd             messages received                          0 
 1778 devd             signals received                           0 
 1778 devd             voluntary context switches               428 
 1778 devd             involuntary context switches               0


* The rdevd daemon's code and the utilities' code are now two separate binaries, linked to librdevd (a crate). This reduces repeated code. The rules can now be tested using `rdevdctl`. The '-h' flag prints hints for each sub-command.

* Daemon self-restart functionality was implemented. On receiving SIGUSR1, the daemon will: stop all tokio tasks, exit from async mode (because tokio does not like fork), fork itself, detach from the parent and wait until the parent quits. Then it retrieves the exec path from sysctl(9) and calls exec without returning. This mode is useful when it is required to preserve the connections from clients. The daemon will not close the connections. Instead, it downgrades all connections (UnixSeqpkt and UnixStream) into OwnedFd, removes the CLOEXEC flag, serializes this data into JSON text and stores it in the environment, so the new process spawned by the previous one can find the environment variable, restore the FDs into UnixSeqpkt or UnixStream and continue. Clients (connected for broadcast) should not notice that the rdevd daemon has restarted.
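The fd-preserving part of that restart can be illustrated with a stdlib-only Rust sketch. The function names here are mine, not rdevd's; the real daemon serializes several fds as JSON and must also clear FD_CLOEXEC (via fcntl) before exec, which is omitted because this sketch never actually execs.

```rust
use std::io::{Read, Write};
use std::os::unix::io::{FromRawFd, IntoRawFd, RawFd};
use std::os::unix::net::UnixStream;

// Reduce a live socket to its raw fd and stash it as text (rdevd stores
// JSON in an environment variable). Ownership leaves the UnixStream, so
// the fd is NOT closed when the wrapper goes away.
fn stash_fd(sock: UnixStream) -> String {
    sock.into_raw_fd().to_string()
}

// The "new" process side: re-wrap the inherited fd into a UnixStream.
fn restore_fd(stash: &str) -> UnixStream {
    let fd: RawFd = stash.parse().expect("numeric fd");
    // SAFETY: we are the sole owner of this fd; it came from stash_fd().
    unsafe { UnixStream::from_raw_fd(fd) }
}

fn main() {
    let (mut client, server) = UnixStream::pair().unwrap();
    let stash = stash_fd(server);          // "old" process serializes the fd
    let mut restored = restore_fd(&stash); // "new" process restores it
    client.write_all(b"ping").unwrap();
    let mut buf = [0u8; 4];
    restored.read_exact(&mut buf).unwrap();
    assert_eq!(&buf, b"ping"); // the connection survived the round-trip
}
```

Because only the integer fd travels (and real inheritance happens via exec, not via this in-process round-trip), the peer keeps its end of the socket open the whole time, which is why clients don't notice the restart.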

* The syslog-rs crate was updated and received some more features. Some unsafe code was replaced. The rdevd daemon attempts to init syslog only in async mode (when tokio is initialized). Duplicates of sync and async code in the base were removed. If the daemon's output is tapped, the syslog-rs (Rust) crate is now capable of writing into a local file instead of the syslog socket. It creates an additional thread which prints output to the file.

* The devctl message buffer is now allocated using the shared-buffer-rs crate. This crate allows obtaining (single) exclusive access to the inner buffer space and shared access to its content from multiple tasks or threads (one access type at a time!). This makes it possible to avoid memcpy when the same data has to be broadcast to clients, and to avoid dealing with Arc'ing the buffer. The crate is lock-free, built on atomics and operation ordering. If the CPU does not support Acquire and Release ordering, it will not work. If the buffer is of the "drop in place" type, it is de-allocated as soon as there are no base references and no active read or write references. If one day multitasking is added (parallel processing of events), this crate can manage the allocation of buffers. (The crate was taken from a dusty shelf and was not properly tested. Race conditions may possibly happen in multi-threaded mode, but crossbeam queues are built on the same approach and they work.)
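For contrast with shared-buffer-rs (whose internals I haven't inspected), the conventional stdlib way to hand one message to many clients without a per-client memcpy is a reference-counted slice. This is my illustration of the zero-copy broadcast idea, not the crate's API:

```rust
use std::sync::Arc;

// One heap buffer, N cheap handles: cloning an Arc bumps a reference
// counter, it does not copy the message bytes.
fn broadcast_handles(msg: &[u8], n_clients: usize) -> Vec<Arc<[u8]>> {
    let shared: Arc<[u8]> = Arc::from(msg); // single allocation + copy
    (0..n_clients).map(|_| Arc::clone(&shared)).collect()
}

fn main() {
    let msg = b"!system=IFNET subsystem=em0 type=LINK_UP";
    let handles = broadcast_handles(msg, 3);
    // All three clients see the very same underlying buffer:
    assert!(Arc::ptr_eq(&handles[0], &handles[2]));
    assert_eq!(&*handles[1], &msg[..]);
}
```

The trade-off the post describes is exactly avoiding this Arc bookkeeping (and its atomic refcount traffic) with a single exclusive-writer/shared-readers buffer instead.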

* Parsing of the devctl messages no longer modifies the message.

* VarList now holds the variables' payload in a Cow<> (copy-on-write), borrowing a reference instead of creating a new String.
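A minimal sketch of that Cow pattern (the quote-stripping rule here is hypothetical, chosen just to force the owned case; rdevd's actual parsing rules may differ):

```rust
use std::borrow::Cow;

// Return a borrow of the original message when the value needs no
// rewriting, and allocate a new String only when it does (here:
// stripping surrounding quotes, purely as an example).
fn var_value(raw: &str) -> Cow<'_, str> {
    if raw.len() >= 2 && raw.starts_with('"') && raw.ends_with('"') {
        Cow::Owned(raw[1..raw.len() - 1].to_string())
    } else {
        Cow::Borrowed(raw) // zero-copy: points into the devctl message
    }
}

fn main() {
    assert!(matches!(var_value("em0"), Cow::Borrowed(_))); // no allocation
    assert_eq!(var_value("\"em0\""), "em0");               // owned copy
}
```

The common case (no rewriting needed) then costs nothing per variable, which matters when every devctl event is parsed into a VarList.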

* The config is now wrapped into Arc and RwLock. This allows modifying rules on the fly!
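The "rules on the fly" setup is the standard Arc<RwLock<...>> pattern; this stdlib sketch uses invented names (Config, rules), not rdevd's actual types:

```rust
use std::sync::{Arc, RwLock};

// Illustrative stand-in for the daemon's config; field names are mine.
#[derive(Default)]
struct Config {
    rules: Vec<String>,
}

fn main() {
    let cfg = Arc::new(RwLock::new(Config::default()));

    // "Control socket" side: take the write lock to swap rules in
    // without restarting the daemon.
    cfg.write().unwrap().rules.push("attach 100 { ... }".into());

    // "Event loop" side: readers take cheap shared locks.
    assert_eq!(cfg.read().unwrap().rules.len(), 1);
}
```

Each tokio task clones the Arc; the RwLock lets many event handlers read rules concurrently while a rule load/remove request briefly takes the write lock.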

* Each rule is stored with a unique ID.

* A remote control socket based on SOCK_SEQPACKET was added, which allows performing the following operations:
1) load a rule immediately (the rule must be in the same format as in the config file; not tested)
2) remove a rule (by ID; the ID is in hex format, but I decided to postpone standardization of both the load and remove operations)
3) list rules (all, attach, detach, notify, nomatch)
4) display status (the daemon writes back the same info as for SIGINFO)
5) restart

The number of simultaneous users can be controlled through the program's arguments.

* The clients.rs module was re-implemented.

* The remote control socket has to be enabled using the program's argument flags. See '-h' or rdevd_daemon.in. By default it is turned off.

All changes were only poorly tested so far; testing is now in progress by the interested party. The version in the repository differs from the private version, but all important changes are there.

I did not get the idea behind executing the actions through the shell, which was ported directly from the original devd, i.e. /bin/sh -c "<cmd>". Why not execute the binaries directly? But I have my own vision and plans, so this is up to the community.


Since 14.2, if you want to test rdevd: before setting devd_enable="NO", you should modify devd's RC script and comment out the line where devctl is disabled via sysctl; otherwise rdevd will not start, because it turned out that it is not possible to re-enable devctl.

One known issue is the message "Waiting 30s for the default route interface", which appears right after rdevd starts. Probably this is because the native devd starts earlier, or there is wrong rule ordering in the vector, or whatever.

I am not planning to support the rdevd code officially. While I am still running FreeBSD instances, and the project interests me personally and some other people, everything changes so fast and dramatically that I am afraid tomorrow may not even come.
 