Disclaimer

Black Dragon is MY Viewer, I decide which features I want to add and which to remove. I share this Viewer to show the world that user base size is not important; I rate quality by the effort, thought and love put into a project, not by some roughly estimated numbers. I consider feature requests only if you can name proper, valid reasons I can agree with. It is my (unpaid) time I'm putting into this project; I'm not here to cater to every Joe's desires.

Wednesday, August 7, 2013

6. August Update

I kept myself busy today with fine-tuning.
Not much; the outcome doesn't really make me happy.

I'll start with some theoretical tech talk about why I'm very limited here.


Warning: a lot of assumptions here!
As far as I've seen there are several ways to make shadows work, of course some more performance-hungry than others. Most games seem to use a fixed, often very high resolution shadow map. How can they manage to have such nice shadows without totally overkilling your performance? Well... Second Life has no boundaries: our camera is free, the world is free, we can move and look wherever we want, as close as we want. Other games often don't allow such a huge amount of freedom for a reason. What if we were able to look very closely at our shadows? At our textures? At everything? We would find out all the tricks and hacks developers use to make our games look good without killing performance. One of these tricks is the shadow map, because shadows can be a performance eater, especially if rendered in realtime.

Did you ever notice that Second Life renders all of these in realtime? Projected lights: realtime. Sun shadows: realtime. Games often use "baked" shadows, or very very low resolution shadows, for, let's say, interior scenes: shadows from the sun are baked into the textures or the map and never change, move or do anything at all, and only "projected" lights from light sources use realtime calculation, because baking saves a lot of performance. Second Life does everything other games would do as a pre-baked texture in realtime, resulting in a huge performance drop, which is probably one of the many reasons Second Life currently doesn't allow more than 2 shadows from projected lights.

What else do games do differently with shadows? Well, the biggest shadow, which in this case is the always and everywhere existing sun shadow, is often a pre-baked texture, leaving a lot of memory and processing power for the other shadows and allowing them to be rendered at a much higher resolution.
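
Just to make that cost difference concrete, here is a tiny C++ sketch; the Scene/Light types are made up for illustration and this is not actual viewer code. The point is simply that every realtime shadow means an extra depth pass over the scene every single frame, while a baked sun shadow adds nothing at runtime.

    // Hedged illustration only; hypothetical types, not viewer code.
    #include <cstdio>
    #include <vector>

    struct Light { bool casts_realtime_shadow; };

    // Every realtime shadow caster forces an extra depth-only render pass
    // of the whole scene from the light's point of view, each frame.
    int extraScenePassesPerFrame(const std::vector<Light>& lights)
    {
        int passes = 0;
        for (const Light& l : lights)
            if (l.casts_realtime_shadow)
                ++passes;                 // a baked shadow adds 0 here
        return passes;
    }

    int main()
    {
        std::vector<Light> secondLife = { {true}, {true}, {true} };   // sun + 2 projectors, all realtime
        std::vector<Light> typicalGame = { {false}, {true}, {true} }; // sun baked, 2 realtime projectors
        std::printf("SL-style:   %d extra scene passes per frame\n", extraScenePassesPerFrame(secondLife));
        std::printf("Game-style: %d extra scene passes per frame\n", extraScenePassesPerFrame(typicalGame));
    }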

Can't we just turn up the shadow resolution? We could; in fact we have to, to counter the pixelation caused by our high draw distance, but that again will decrease performance drastically... but why do we have to turn up the resolution with higher draw distances? Why does the draw distance decrease our shadow resolution at all? It doesn't!

Again, this is a lot of assuming now; I don't know exactly how it works, but I don't like the theoretical code stuff anyway. I go by what I can actually see, and what I see is:

We have a shadow map. This shadow map has a given resolution; let's say the same resolution as our rendering resolution: 1920x1080. Imagine a prim with a 1920x1080 picture on it.


This 1920x1080 picture is our shadow map. We put it on a 1.920m x 1.080m prim, the perfect size for this resolution: it's not too big and not too small, you can see all the details and the picture is super sharp. This prim is our draw distance of 64m. What happens if we resize this prim to 2x its size? Right! The texture will (if we ticked the option for it) resize with it; our texture will become stretched over the new size of the prim, let's say 3.840m x 2.160m, twice the size as before, but our picture is still only 1920x1080, resulting in it becoming "stretched" or pixelated on closer (or same zoom level) inspection. The same goes for our draw distance: if we pull our draw distance up to 128m, our shadow map will still stay at 1920x1080 resolution but has to be laid over a surface that is twice as big as before, resulting in shadows becoming more pixelated. We could resize our prim (draw distance) to four, five, ten or twenty times its original size; our picture (shadow map) will always stay the same resolution but has to be laid over a surface that keeps getting bigger and bigger. Ever stood in front of a huge 64m flat surface that has a 1024x1024 texture on it? It's SUPER blurry (pixelated) if you look at it from a normal angle and zoom level; to make it look good again you would have to zoom far out. Exactly the same is happening with our shadows.
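
To put rough numbers on that stretching analogy: if one shadow map of fixed width has to cover the whole draw distance, the texels per meter drop in direct proportion as the draw distance grows. A quick hedged sketch, reusing the 1920-texel width from the example above (a real viewer splits the sun shadow into several cascades, so treat this as the simplified single-map case):

    // Simplified single-map illustration of the prim analogy, not viewer code.
    #include <cstdio>

    int main()
    {
        const double shadow_map_width = 1920.0;                 // texels, as in the example
        const double draw_distances[] = { 64.0, 128.0, 256.0, 512.0 };

        for (double d : draw_distances)
            std::printf("%4.0fm draw distance -> %5.2f shadow texels per meter\n",
                        d, shadow_map_width / d);
        // 64m -> 30.00, 128m -> 15.00, 256m -> 7.50, 512m -> 3.75:
        // same map, same texel count, ever blurrier shadows.
    }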


Basically we would need to double the resolution of our shadows each time we double our draw distance; that means 64m = 1.0x, 128m = 2.0x, 256m = 4.0x, where 2x is often already the point at which most GPUs will say fuck you and bail out, or only produce single-digit framerates even on low draw distances.
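
As a rough sketch of why that scaling hurts so quickly: keeping the same sharpness means the map's width and height both grow with the draw distance, so the texel count (and with it memory and fill cost) grows with the square of that factor. Assuming the 1920x1080-at-64m baseline from above, purely for illustration:

    // Hedged sketch: the cost of keeping shadow sharpness constant while the
    // draw distance doubles, using the 1920x1080 / 64m baseline from above.
    #include <cstdio>

    int main()
    {
        const double base_distance = 64.0;                  // meters
        const double base_w = 1920.0, base_h = 1080.0;      // texels

        for (double d = 64.0; d <= 512.0; d *= 2.0)
        {
            double scale = d / base_distance;               // 1x, 2x, 4x, 8x linear
            double w = base_w * scale, h = base_h * scale;
            std::printf("%4.0fm: %3.0fx scale, %5.0f x %5.0f map, %6.1f megatexels\n",
                        d, scale, w, h, (w * h) / 1.0e6);
        }
        // The linear scale only doubles per step, but the texel count
        // quadruples, which is why 2x is already where many GPUs give up.
    }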

So what can we do? Nothing. Absolutely nothing. I remember the shadows in Viewer 1 working the other way around, where the shadow map resolution scaled (probably) exponentially with your draw distance, resulting in them hardly changing their quality but decreasing the framerate drastically, at an alarming speed. Here's a picture to show you roughly how that would look; note that those textures are web-media faces and are snapshots I took, which is why they don't scale that well. You might not see it, but the texture repeats vertically too.


I hope you understood at least a little bit of what I was trying to explain...
Anyway, let me know if I should implement a feature that automatically scales your shadow resolution with your draw distance in the future.
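
For anyone who wants to picture what such a feature could look like, here is a tiny hedged sketch; the function name, the 64m baseline and the 4x cap are illustrative assumptions only, not how Black Dragon actually does (or will do) it:

    // Hypothetical auto-scaling of shadow resolution with draw distance.
    // Names, baseline and cap are illustrative assumptions only.
    #include <algorithm>

    float autoShadowResolutionScale(float draw_distance_m)
    {
        const float baseline_m = 64.0f;   // distance at which 1.0x looks "right"
        const float max_scale  = 4.0f;    // hard cap so the GPU doesn't bail out
        return std::clamp(draw_distance_m / baseline_m, 1.0f, max_scale);
    }
    // e.g. 64m -> 1.0x, 128m -> 2.0x, 256m and beyond -> capped at 4.0x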

That's it for today!
Niran.

7 comments:

  1. All I know is that v1 viewers' shadows on my rig make my fps drop like shit; now I know why, as I always used a 1024m draw distance on them (Imprudence and Phoenix!).
    I love the shadows the way I use them on Niran's Viewer, as I can really use them at all times without a loss in fps. But if the intent of Black Dragon is to give everyone, even those who only come in-world to take pics, the best graphics despite performance, I would say: make them scalable, but as an option if possible!

    ReplyDelete
  2. Yes please, and thx btw, you're awesome :D Keep up the good work.

    ReplyDelete
  3. Some important missing-link info here, Niran... MEA CULPA on this one, I think. Sorry.

    The Lindens appear to have incorporated a technique I discovered, which makes use of GPU acceleration and PhysX interpretation, and SLI, and flow control, which Niki incorporated into the Firestorm viewer, which basically lets you simulate the true physical properties of the entire energy spectrum, and matter, on any 2D or 3D canvas.

    this cut and paste is a mess, but it has the relevant info...

    https://docs.google.com/document/d/1yLwq7aWOAQl0vacCvOsyfZ00_pi3rxm2UjiqjAV8ZAs/edit

    I think they've had to temporarily fast patch shadow rendering because something very unexpected was happening... avatar body shadows were demonstrating the de-interlace separation horizontal lines from top to bottom, giving the shadow the appearance of being underwater, and hair was appearing as if it was underwater too.

    The GPU was interpreting and rendering the water content it reasoned was in the flesh, and hair, as if it were liquid and not a pressurized semi-solid mass.

    I'm sure they'll find a fix soon enough. I guess they looked at it and decided it was too good to pass up, and worth a short term patch over, server side.

    It literally hydrates, and properly simulates the properties of water, when you use the technique with flow control.

    And the light is as real as it's gonna get too, in terms of how it's expressed... the first night I stumbled onto this, I literally sunburned my hands, quite alarmingly.

    You can simulate IR, UV, Sunlight, prismatic component colors of white light, and matter, if you know the correct balance of block color and greyscale equivalents to use, based on either the observable translucent properties of that matter, or on the colors and greyscale which appear when that matter is burned/oxidized.

    The instructions in that link above are also flawed, badly.

    I forgot to include the actual light component into the process, lol.

    So you basically want to either halve the values to 6.15% and 6.2% instead of 12.3 and 12.4%, and use white light or yellow sunlight for the other half of the equation... like a top layer of 50% value white light or yellow light, set to multiply or hard light.

    What's significant here, and of true value, is the ability, eventually, to virtualize entire fabricated hardware systems with all the material components having their respective material properties expressed in the build.

    Think natural blue diamond, not-duplicable-in-synthetics-at-this time crystalline matrix boron peak present, active, and intact, and capable of fluorescence, and thus acting as a semiconductor, without having to invade the Crimean Congo region for the diamonds.

    Ditto for all the rare earths.

    Give them time guys. They'll work it out. And this is too important a thing to push aside over minor short term cosmetic issues.

    It was important enough, in my mind, to warrant it being given to everyone, as public domain, and building in the obviousness argument to prevent patent trolling, so that no one party could claim the rights to it.

    I gave it to Argonne and Ghuangzou at the same time, and then to the general public the next day.

    So I apologize for FUBARing your shadows with my water, lol.

    But believe me, in the long run, it will have been worth the short term pain in the butt.

    Niran, I have some other info for you about CHUI, and more graphics info, which we should talk about. Please email me. I left my email in your SL IM's.

    peace

    ReplyDelete
  4. ah ok..... healingshoes@gmail.com

    We need to talk about CHUI having Brian Shuster's eye tracking, and how we may be able to incorporate it for non-scripted human agent users, and use it to get proper eyelines for video and photography.

    There's a LOT of stuff in CHUI people don't know =)

    ReplyDelete
  5. Ask Niki for the color textures and file if you need them. Niki Dasmijn from Firestorm. I gave her the full set at the same time as I sent it to LL.

    ReplyDelete
  6. oh, taking note of your opening paragraph about the realtime render, yes, you're correct.

    but all of that doesn't even rate as a mild burp for their server, trust me on that, lol.

    they run, at worst, at Oak Ridge Titan speeds. Yes, seriously.

    Go to the Argonne labs at Lawrence Livermore website, ANL.gov and check out the computing info and side by side comparisons.

    Blue Gene P's (Gene's) challenge, from 2003, was a 4D torus plus a tree.

    Blue Gene Q's (GQ's) challenge was a 5D torus plus a tree.


    OK... your texture complaints...

    You're wrong about one thing at a base level of your premise... the way that textures are applied to a render and composited is always incremental, based on fixed values which are used to make the compositing and demuxing process much simpler.



    64, 128, 256, 512, 1024, 2048, 4096, etc.

    so a 1920x1080 would become a 1024x1024... that's what's fuckin ya.

    and it's why I argued strongly, and pouted, and stomped my feet and held my breath trying to get them to implement the Titan GPU SLI standard size of 2560x1600. It gives us back our LOD and aspect ratio for the textures.

    So this is one issue you have here.

    The other one, the non-tracking LOD thing, part of it is my crappy flawed light forcing a temporary patching thingy.

    The other part of the problem is their formerly super duper top secret classified very very ultra top secret spy satellite double secret probation omg so secret image reconstitution algorithms, which let them zoom in on any spot on any picture, and reconstitute the details which should be there, no matter how blurred and crappy the original may be.

    Sounds an awful lot like the tech we used to hear about in whispers during the Cold War, doesn't it?

    The tech that US Spy Satellites used to zoom in from outer space and read the headlines on a newspaper on a Russian naval vessel.

    That's because it is the same tech, lol.

    And they've declassified it, and downstreamed it for us.

    You can get it via Smith Micro, the Poser people. It's called PHOTO ZOOM.

    And OMG does it work well.

    OK... so fixing your grass is a two step process. Same thing I did with my skins more or less.

    We want a higher resolution, but not with all the crappy blur, right?

    Which means we need photo zoom to enhance the image and make it look good when we make the texture bigger.

    So you take it up to say 4096.

    But now you have another issue.... at that size, you can see very clearly that the minute details aren't minute enough, aren't fine enough.

    For me it was the skin pores being way too big...

    So I used the S-Spline XL zoom setting, and checked the Unsharp Mask box to activate unsharp masking.

    Then I set Radius to 0.2 (or 0.02 I forget) for the head texture and 0.1 (or 0.01) for the body and legs.... I did this because of the surface area of the head in the in-world render, vs the surface area of the legs and body.

    Voila... organic grain now at the right proportions. Bing, Bang, Boom.

    Done.

    You probably only need a couple repeats if you do it at that LOD.

    I've found Filter Forge to be very, very handy also.

    But it has some quirks. It leaves fingerprints, so to speak.

    ReplyDelete

  7. There may be another issue with the shadows too, which I just realized...

    The shadow quality and density may be determined by the amount of available dimensional data for any given object. In fact, thinking about it some more, I'm positive this will be the case.

    To GQ, a "complete" dimensional variable set needs the following...

    Specular Map
    Normal Map
    Bump Map (positive ie outward bumping topography)
    Displacement Map (negative ie inward concave topography)
    Ambient Occlusion Map
    Diffusion Layer
    Subsurface or Subsurface Scattering layer
    Reflection layer
    Alpha Layer
    Makeup layer
    UV Map


    So 11 subsets of data plus a texture makes 12 dimensions, plus time... and you get the 13-dimensional M-theory brane.

    That's a complete data set to Gene, Mira or GQ. I think. Educated guess but I'm fairly sure it's correct or very close to correct.

    So if some of the data is not present, there is going to be some degradation, a reduction in overall coherence of the image AND its shadow, because the shadow is a spectral artifact of the base object. Phantom object, phantomish shadow.

    My suggestion is to start compositing all your textures into one multi layered clump.

    Don't worry, GQ can tear it apart in less than a blink of an eye, and separate all the layers, see what's on each layer, and apply it accordingly.

    It makes life easier if you punch holes in the layers as you go, though, in one corner, just to make sure a tiny little piece of each layer can be seen from the surface layer.

    trust me on this... you CAN roll everything into a single composite, if you really want to. the system will know what to do with it.

    oh, you can very liberally apply motion blur or Gaussian blur to your textures too, and unless it's really extreme, the render won't come out blurry. I tested this already on some feet.

    and if you do it right, angle the blur correctly, in the way the joint should move IRL, and take the time to do each knuckle of each toe, the render engine will take note of that too, and bring your toes to life....

    Literally.

    They'll wiggle and curl on their own, without an AO.

    It blew me away, lol.

    I went to test a theory with motion blur, put the texture on my mesh feet, and static mesh feet with NO TOE BONES whatsoever began to wiggle, fidget, and curl before my very eyes.

    That's Niki's awesome sauce flow control making love to the GPU render engine, while I play footsie with both of them, lol.

    OK. so that's a big bag of goodies for y'all to play with, lol

    have fun

    =)

    ReplyDelete