Daz LOD System - has someone tried this product? (CLOSED)

https://www.renderhub.com/lauwurence/daz-lod-system
This looks potentially interesting, would love to hear from people who have tried it!
I watched the promo video; it looks like a one-click solution and very easy to use. But at the end it just fades to black and I don't understand what's supposed to be happening there. (Also, that was infinity% more pepe than I needed in my life.)
If anyone has tried this product, would love to know things like
- User experience reports. Does it work well?
- What exactly does it do? Yes, I read the promo text, but I'd love to hear about the nitty-gritty.
- Can it do things that V3Digitimes' trusty old Scene Optimiser can't do?
Thanks, RenderHub Support, for notifying me about this thread!
The promo video didn't fit into 100MB so I had to crop it. The full video can be found on YouTube: https://youtu.be/EXDaagYtTiE?si=paZZtLzCnAS0GGw7&t=175
(couldn't actually add any links in the description).
The script has 6 sales on Gumroad, and since it has no sales here yet, I'll try to explain in detail what exactly it does. I apologize in advance for my English. I couldn't upload the script to the Daz Store because I'm from Russia.
Yes, it's a one-click solution. When you press the Update LODs button, it starts collecting object data. It calculates the size of, and the distance from the active camera to, each object's surface bounding boxes (not the object's bounding box itself, but each surface's, because that makes much more sense). It also gets the render resolution, the texture path and the property value (RGB or a single float). Then it stores everything in data.json and runs the Python code, which is packaged as a built .exe. The script weighs 60 MB because it bundles Python and several libraries, so you don't have to install everything by hand.
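As a rough illustration of that hand-off, the per-surface records written to data.json might look something like this. The field names here are purely hypothetical, not the product's actual schema:

```python
import json
import os
import tempfile

# Hypothetical sketch of one per-surface record the Daz-side script
# could write before handing off to the bundled Python executable.
record = {
    "object": "Chair_01",
    "surface": "Fabric",
    "texture_path": "C:/Textures/chair_fabric_diff.jpg",
    "value": [0.8, 0.8, 0.8],        # RGB colour (or a single float for scalar channels)
    "distance_to_camera": 12.5,      # distance from the active camera to this surface's bounding box
    "render_resolution": [1920, 1080],
}

# Write the collected data, then read it back the way the Python side would.
path = os.path.join(tempfile.gettempdir(), "data.json")
with open(path, "w") as f:
    json.dump({"surfaces": [record]}, f, indent=2)

with open(path) as f:
    data = json.load(f)
```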
On the Python side the following happens. Using regex and removing suffixes, it tries to find the original images (LOD zero) from the filenames provided and uses those as a starting point. Then it links images to objects to make sure only one image LOD is created and applied per image. The script can also hide objects that are too small, if you ask it to, but it ignores objects with emissive surfaces, because we don't want to hide light sources. After that the magic starts to happen: the script creates LODs using all the power of your CPU with multithreading (which is basically why it can be 50 times faster than V3D Scene Optimizer, depending on your CPU's speed and core count).
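The parallel fan-out over images could be sketched like this. This is an assumption about the structure, not the product's actual code; note that numpy and OpenCV release the GIL during heavy array work, so even a thread pool can keep multiple cores busy here (a process pool is the other common choice):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def build_lods_for(path):
    # Stand-in for the real per-image work (load, downscale, compress, save).
    # Each image is independent, so the work parallelises cleanly.
    return (path, "ok")

def build_all(paths):
    # One worker per core; order of results matches the input order.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(build_lods_for, paths))
```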
The process of creating LODs is interesting. The script opens each image with cv2, runs over the pixels with numpy, gets the minimum and maximum and calculates the average value. The result is a delta factor (0.0-1.0) that tells us how much useful data the image contains. If it's 0, the image is considered solid: it is fully removed from the scene and replaced with its colour value or float value, multiplied by the value already set in the property. So instead of a solid black image we simply get a colour set to [0, 0, 0]. There's another case where the script temporarily hides an image from the scene: when the object is too far away and doesn't really contribute. The script then takes the image's average colour, multiplies it by the property value, removes the image and applies the value. In this case the image path is stored on the object itself, in a "Dict Property" property, which is basically a JSON dictionary serialised to a string containing all the hidden textures and original property values. That's why I consider the workflow non-destructive: no data is stored outside the project.
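A minimal numpy sketch of that delta-factor idea (operating on an already-loaded pixel array; the real script reads the file with cv2 first, and its exact formula may differ):

```python
import numpy as np

def delta_factor(pixels: np.ndarray) -> float:
    # pixels: an HxWxC uint8 array, e.g. from cv2.imread(path).
    # 0.0 means every pixel is identical -> the map is a solid colour.
    arr = pixels.astype(np.float32) / 255.0
    return float(arr.max() - arr.min())

def average_value(pixels: np.ndarray) -> list:
    # Per-channel mean in 0..1, used as the replacement colour/float
    # when the image itself is removed from the scene.
    return (pixels.astype(np.float32).mean(axis=(0, 1)) / 255.0).tolist()
```

A solid black map gives a delta factor of 0.0 and an average of [0.0, 0.0, 0.0], so the texture slot can be replaced by the plain value [0, 0, 0] exactly as described above.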
But how does the script create images? When it comes to saving textures, it uses the Pillow library, which can read/write ICC profiles and compress images. The higher the LOD, the lower the resolution and the stronger the compression, because we don't want to fill the content library with overly heavy images. Each LOD is 2 times smaller than the previous one. You can also set a minimum texture resolution (256 by default), and the script will clamp the image size when needed, taking the aspect ratio into account. Once an image is created, the script stores the data (filenames, values) in the dictionary and compares the new data to the old, making sure only useful changes are passed back to Daz.
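The halving-with-clamp schedule described above might look like this. This is a sketch; the exact rounding, clamping and compression settings are the product's own:

```python
def lod_size(width, height, lod, min_side=256):
    # Each LOD is half the previous one (lod 0 = the original resolution).
    scale = 2 ** lod
    w, h = width // scale, height // scale
    # Clamp so the shorter side never drops below min_side,
    # preserving the aspect ratio and never exceeding the original.
    short = min(w, h)
    if short < min_side:
        factor = min_side / short
        w = min(round(w * factor), width)
        h = min(round(h * factor), height)
    return w, h
```

Pillow's `Image.save()` would then write each level, e.g. with a lower JPEG `quality=` for the higher LODs.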
Everything on the Python side happens within 0.1-5 seconds! The only issue is that Daz slows down everything else.
When the Python script finishes its job, the Daz script reads data.json and applies all the changes (applies/removes textures, sets/stores values and creates/deletes Dict Properties). When all the work is done, the VRAM calculator reads the resolution of each image and displays the estimated VRAM usage. If you have Texture Compression enabled, the actual value can differ.
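A back-of-the-envelope version of such a VRAM estimate might look like this. The RGBA expansion and the one-third mipmap overhead are my assumptions about how a calculator like this could work, not the script's actual formula, and as noted above Texture Compression changes the real number:

```python
def texture_vram_bytes(width, height, channels=4, mipmaps=True):
    # Rough uncompressed estimate: 1 byte per 8-bit channel, with the
    # texture expanded to RGBA. A full mip chain adds about one third
    # on top of the base level.
    base = width * height * channels
    return base * 4 // 3 if mipmaps else base
```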
So, does the script work well? I'm an AVN developer, and I created a huge scene for my project that I tried to optimize with V3D Scene Optimizer. I showed it in the YouTube video: the whole "optimization" process could have taken me 20 minutes, so I gave up. That's basically why I started writing this script for my personal use, then decided to share it with my friends who are also AVN developers. Together we fixed a few bugs, and right now I don't see any new bug reports.
"Can it do things that V3Digitimes' trusty old Scene Optimiser can't do?"
Let's change the question: can V3D Scene Optimizer even be considered a scene optimizer? To me, it's like an old machine that makes noise, shakes and lets off steam, but does its best to look very busy. To be honest, V3D SO is only capable of about 10% of what the Daz LOD System does. And I'm not sure I named my script correctly, as it does much more than just change the resolution of images. Hmm, I could split the functionality, sell everything as separate scripts and become rich...
But I'm bad at marketing.
Anyway, I'm planning to add one more feature to the script: the ability to bake bump maps into existing normal maps. So instead of two maps that do pretty much the same thing, we'll have one single normal map.
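One common way to do that kind of baking (not necessarily what the update will use) is to derive a tangent-space normal from the bump map's gradients and then blend it with the existing normal map, e.g. with a whiteout-style blend:

```python
import numpy as np

def bump_to_normal(height: np.ndarray, strength=1.0) -> np.ndarray:
    # height: HxW float array in 0..1 (a bump/height map).
    # Finite-difference gradients give a tangent-space normal per pixel.
    gy, gx = np.gradient(height.astype(np.float32))
    n = np.dstack([-gx * strength, -gy * strength,
                   np.ones_like(height, np.float32)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5  # remap -1..1 to 0..1 for an RGB normal map

def combine_normals(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Whiteout-style blend of two 0..1 RGB normal maps: sum the XY
    # components, multiply the Z components, then renormalise.
    na, nb = a * 2 - 1, b * 2 - 1
    n = np.dstack([na[..., 0] + nb[..., 0],
                   na[..., 1] + nb[..., 1],
                   na[..., 2] * nb[..., 2]])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5
```

A flat bump map bakes to the neutral normal (0.5, 0.5, 1.0), so combining it with any existing normal map leaves that map unchanged, which is the sanity check you'd want here.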
Speaking of updates, I have a Discord channel where everyone who bought the script can share their bug reports, see the to-do list or download latest updates: https://discord.com/invite/z3qdxYhrB6
Here's a happy pepe who shrunk the scene texture VRAM usage from 12GB to 1GB. Not my screenshots!

bonj
Fri, Mar 15, 2024

Interesting product.
How does it deal with normal and displacement maps? V3Digitimes seems to dump them altogether, and that's no good for me.
Does your script keep normal and displacement maps and compress them as well?
LaUwUrence
Sat, Mar 16, 2024

The script affects all the textures it can find but treats them differently. Yes, it will reduce their resolution, and they will also be hidden at a certain distance if the script decides they don't make an impact.
The script looks super interesting and useful; unfortunately, 40 bucks is a bit out of my reach at the moment. I've put it on my wishlist so I don't forget about it.
This does look really cool. I tend not to work with huge scenes but I'll definitely keep this in mind when the next huge scope project drops on my desk.
"the image is considered to be solid and will be fully removed" - This makes me happy!
Nothing aggravates me more than seeing a raft of individually named textures in one product that are all just black metallicity images. They all carry the same VRAM overhead once converted by Iray. It's pure laziness on the part of the producer.