Ultimate Forward Raytracing

There are apparently forward ray tracers out there, but I can't find them; only a few individuals here and there with (probably unfinished) private projects. The project is to make a release version, open source. The forward raytracer should have the same primitives, unions, etc. that POV-Ray has.

It should use an octree: when the scene is analyzed, its boundaries are computed down to the voxel. When traversing space, use shortcuts to navigate the octree, since most subsequent points will not bifurcate all the way to the top of the octree. Assembly language might greatly speed things up here.

Can we pre-compute and store a gradient at each voxel, or does it matter where on the voxel the ray hits? Do we need to make very large voxels to be able to store the entire scene, and forgo precomputing gradients? If we can use voxel space with precalculated gradients (maybe this can be an option, with accompanying coordinate constraints?), then we can support mathematically defined shapes for which there are no derivatives.

Support volume rendering? We can define the outer voxels of a volume as boundaries in between which a spline must travel, and precompute the optimum spline for every latitudinal and longitudinal perimeter, possibly storing the gradients.

Multiple cameras can be defined in the same scene with no loss of rendering time (in forward tracing, photons start at the lights, so extra cameras add no cost). This can even be used for moving around.

Don't just have an R channel, a G channel and a B channel: have 1024 different wavelengths and calculate R, G, B at the end. Each light source and object can have its own spectral envelope. Include envelopes for common materials and light sources, and also allow specification of a blackbody temperature. 1024 is selected because that would allow a smooth hue gradient to span a 1024 by 768 pixel display. (But if the eye can distinguish 256 shades, how many hues can it distinguish?)

Other effects to consider: phosphorescence, BBO crystals, semi-reflective glossiness, diffraction, relativistic time warp and/or gravity bending?
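The octree shortcut could be sketched roughly like this; a minimal Python sketch (all names such as `locate` and `descend` are hypothetical), where each lookup climbs from the previous leaf only as far as needed instead of restarting at the root:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # axis-aligned cube: min corner and edge length
    x: float
    y: float
    z: float
    size: float
    children: list = field(default_factory=lambda: [None] * 8)
    parent: "Node | None" = None
    depth: int = 0

def contains(node, px, py, pz):
    return (node.x <= px < node.x + node.size and
            node.y <= py < node.y + node.size and
            node.z <= pz < node.z + node.size)

def child_index(node, px, py, pz):
    half = node.size / 2
    return ((px >= node.x + half) +
            2 * (py >= node.y + half) +
            4 * (pz >= node.z + half))

def descend(node, px, py, pz, max_depth):
    # walk down, creating children lazily, until the voxel depth is reached
    while node.depth < max_depth:
        i = child_index(node, px, py, pz)
        if node.children[i] is None:
            half = node.size / 2
            cx = node.x + half * (i & 1)
            cy = node.y + half * ((i >> 1) & 1)
            cz = node.z + half * ((i >> 2) & 1)
            node.children[i] = Node(cx, cy, cz, half,
                                    parent=node, depth=node.depth + 1)
        node = node.children[i]
    return node

def locate(last_leaf, px, py, pz, max_depth):
    # the shortcut: climb from the previous leaf only until a cube
    # containing the new point is found, then descend again
    node = last_leaf
    while node.parent is not None and not contains(node, px, py, pz):
        node = node.parent
    return descend(node, px, py, pz, max_depth)
```

Consecutive sample points along a photon path usually share a deep ancestor cube, so the climb is short on average; only occasionally does a point "bifurcate all the way to the top".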
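The many-wavelengths idea might look like this end to end; a toy sketch assuming 64 bins rather than the proposed 1024 (to keep it fast), Planck's law for the blackbody envelope, and crude Gaussian stand-ins for the real CIE colour-matching tables (every name and constant choice here is an illustration, not a spec):

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann
N_BINS = 64
# wavelength bin centres spanning the visible range, 380-780 nm
WAVELENGTHS = [380e-9 + i * (400e-9 / (N_BINS - 1)) for i in range(N_BINS)]

def blackbody(wl, temp):
    """Planck's law: spectral radiance at wavelength wl (m), temperature temp (K)."""
    return (2 * H * C**2 / wl**5) / (math.exp(H * C / (wl * KB * temp)) - 1)

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def spectrum_to_rgb(power):
    """Collapse a per-bin power spectrum to RGB at the very end.
    Gaussians at 600/550/450 nm are a rough stand-in for the CIE
    colour-matching functions, not the real tables."""
    r = g = b = 0.0
    for wl, p in zip(WAVELENGTHS, power):
        nm = wl * 1e9
        r += p * gauss(nm, 600, 40)
        g += p * gauss(nm, 550, 40)
        b += p * gauss(nm, 450, 40)
    m = max(r, g, b, 1e-30)
    return (r / m, g / m, b / m)   # normalised so the brightest channel is 1

# a hot source comes out balanced, a 2000 K source strongly reddish
cool = spectrum_to_rgb([blackbody(wl, 6500) for wl in WAVELENGTHS])
warm = spectrum_to_rgb([blackbody(wl, 2000) for wl in WAVELENGTHS])
```

The same accumulator works for any spectral envelope (a material's reflectance curve multiplies the per-bin power on each bounce), and the RGB conversion happens only once, at presentation time.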
We should provide algorithms for creating random dirt/flaws/imperfections in surfaces for more realism; several algorithms for different types of surfaces.

Support multiple cores and also distributed computing. RGB images can simply be *summed up* at any time before presentation. While forward ray tracing is slow, this might be desirable for special projects, and also computing speed will inevitably increase.

Things that would be hard to support, or could only be supported ad hoc, namely structured color:
- Tyndall effect (e.g. fire opal) <- unordered
- diffraction grating <- ordered
- iridescence <- means any particularly ordered and changeable structured color; depends on spacing pattern, e.g.: peacock feathers, soap bubbles, mother of pearl, butterfly wings, beetle shells

Other issues: polarization (not necessarily supported, but not necessarily ad hoc).

To render a continuous path through a scene all at the same time, define a 3D shape which is every point the lens passes through, then record not only the wavelengths of the photons that hit it but also the directions they come from. From this data the video can be constructed. Obviously, multiple paths through the scene could also be rendered at the same time.

When something is in motion as the camera is passing through, sometimes it could be separated from the rest of the scene (as if it doesn't affect it) and the majority of the rendering can still be done all at once. Lighting might be off if the rest of the scene doesn't affect it; this can be cured by taking a snapshot of the rest of the scene's lighting effect on it, without continuously rendering the rest of the scene. (Maybe the only utility here is to aim photons only toward the isolated object in our isolated rendering session.)
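The "simply summed up" claim is what makes distributed rendering easy here: each worker's photon deposits are additive, so partial images combine with a per-pixel sum, in any order, at any time. A minimal sketch (the worker below is a hypothetical stand-in that deposits random energy, not a real tracer):

```python
import random

def trace_photons(seed, width, height):
    """Stand-in for one worker's forward-tracing pass: deposits energy
    at whatever pixels its photons happen to hit (hypothetical)."""
    rng = random.Random(seed)
    img = [[[0.0, 0.0, 0.0] for _ in range(width)] for _ in range(height)]
    for _ in range(1000):
        x, y = rng.randrange(width), rng.randrange(height)
        for c in range(3):
            img[y][x][c] += rng.random()
    return img

def sum_images(images):
    """Per-pixel sum: the only reduction step distributed workers need."""
    h, w = len(images[0]), len(images[0][0])
    out = [[[0.0, 0.0, 0.0] for _ in range(w)] for _ in range(h)]
    for img in images:
        for y in range(h):
            for x in range(w):
                for c in range(3):
                    out[y][x][c] += img[y][x][c]
    return out

# two workers with different seeds, combined afterwards
partial_a = trace_photons(1, 4, 4)
partial_b = trace_photons(2, 4, 4)
combined = sum_images([partial_a, partial_b])
```

Because the reduction is a plain sum, intermediate previews can be produced at any point by summing whatever partial images have arrived so far; tone mapping or the spectral-to-RGB step just has to wait until after the sum.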