Author Topic: How hard could smoothing be?  (Read 37297 times)

d3x0r

  • Jr. Member
  • Posts: 75
Re: How hard could smoothing be?
« Reply #30 on: November 22, 2014, 03:53:07 pm »
Did some research on implementing glCallLists in my stuff, and learned they have been deprecated... in favor of vertex buffer objects... which kind of makes sense, since you can create vertex buffers in threads asynchronous to the display, whereas glCallLists have to be built on the GL thread context...
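
A minimal sketch of that split (GLEW is assumed for the VBO entry points; SectorMesh and the two function names below are made up for illustration): the worker thread only fills CPU memory, and the thread that owns the GL context does the actual buffer upload.
Code:
#include <vector>
#include <GL/glew.h>

struct SectorMesh
{
  std::vector<GLfloat> Vertices;   // x,y,z triples, filled on a worker thread
  GLuint VboHandle = 0;            // created and filled on the GL thread
};

// Runs on any worker thread: no GL calls, just CPU-side geometry building.
void BuildSectorMesh(SectorMesh & Mesh)
{
  Mesh.Vertices.clear();
  // ... append the corners of every visible voxel face ...
}

// Runs on the thread that owns the GL context: upload the prepared data.
void UploadSectorMesh(SectorMesh & Mesh)
{
  if (Mesh.VboHandle == 0) glGenBuffers(1, &Mesh.VboHandle);
  glBindBuffer(GL_ARRAY_BUFFER, Mesh.VboHandle);
  glBufferData(GL_ARRAY_BUFFER,
               Mesh.Vertices.size() * sizeof(GLfloat),
               Mesh.Vertices.data(), GL_STATIC_DRAW);
}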

so shaders and display buffer building as another pluggable renderer :)

I find that the reasonable relation would be building buffers per shader per image source... or per block type; with shaders, I'd think the 'default' blocks could be handled by a computational shader based on distance from the poly edge and a color scalar...

and those built per voxel sector ....
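
To illustrate that bucketing, here is a hypothetical layout (nothing from an actual engine): one CPU-side buffer per (sector, block type) pair, so each sector ends up with a small set of VBOs keyed by block type / texture.
Code:
#include <cstddef>
#include <map>
#include <vector>
#include <GL/glew.h>

// One bucket per block type (i.e. per texture / shader combination).
struct BlockTypeBuffer
{
  std::vector<GLfloat> Vertices;   // CPU side, filled while meshing the sector
  GLuint Vbo = 0;                  // GPU side, uploaded later on the GL thread
};

// One set of buckets per voxel sector.
struct SectorBuffers
{
  std::map<unsigned short, BlockTypeBuffer> PerBlockType;  // key: voxel type id
};

// Meshing pass: append each visible face to the bucket of its block type.
void AddFace(SectorBuffers & Sector, unsigned short VoxelType,
             const GLfloat * FaceVertices, std::size_t Count)
{
  BlockTypeBuffer & Bucket = Sector.PerBlockType[VoxelType];
  Bucket.Vertices.insert(Bucket.Vertices.end(),
                         FaceVertices, FaceVertices + Count);
}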

there are also no multi-indexed buffers... for normals in this world, I'd only need a buffer of 6 entries if I could have independent index buffers for texture, vertex, and normal... but indexed drawing uses the same index for vertex/texture/normal, so the buffers all have to be the same length...

http://stackoverflow.com/questions/11148567/rendering-meshes-with-multiple-indices for instance indicates it can be done, but it's more work than it's worth.
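
For reference, the usual workaround discussed in that thread is to fold the separate position/texcoord/normal indices into a single combined index, duplicating a vertex only when its (position, texcoord, normal) combination is new. A rough sketch with hypothetical types and names:
Code:
#include <stdint.h>
#include <map>
#include <tuple>
#include <vector>

struct Vertex { float Px, Py, Pz, U, V, Nx, Ny, Nz; };  // interleaved result

typedef std::tuple<uint32_t, uint32_t, uint32_t> IndexTriple;  // (pos, tex, nrm)

// Merge independent position/texcoord/normal indices into one index buffer:
// each unique (pos, tex, nrm) triple becomes exactly one output vertex.
void MergeIndices(const std::vector<float> & Pos,   // 3 floats per position
                  const std::vector<float> & Tex,   // 2 floats per texcoord
                  const std::vector<float> & Nrm,   // 3 floats per normal
                  const std::vector<IndexTriple> & Corners,
                  std::vector<Vertex> & OutVertices,
                  std::vector<uint32_t> & OutIndices)
{
  std::map<IndexTriple, uint32_t> Cache;
  for (const IndexTriple & C : Corners)
  {
    auto It = Cache.find(C);
    if (It == Cache.end())
    {
      uint32_t p = std::get<0>(C), t = std::get<1>(C), n = std::get<2>(C);
      Vertex V = { Pos[p*3], Pos[p*3+1], Pos[p*3+2],
                   Tex[t*2], Tex[t*2+1],
                   Nrm[n*3], Nrm[n*3+1], Nrm[n*3+2] };
      It = Cache.insert(std::make_pair(C, (uint32_t)OutVertices.size())).first;
      OutVertices.push_back(V);
    }
    OutIndices.push_back(It->second);
  }
}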


olive

  • Administrator
  • Full Member
  • Posts: 149
Re: How hard could smoothing be?
« Reply #31 on: November 24, 2014, 01:24:11 am »
First, some news on development:

We have implemented memory pooling for voxel extensions (published a few days ago on GitHub). We implemented it in a way that you won't have to change your code.

Also, we'll add your nice optimisation for water voxels.

Quote
Did some research on implementing glCallLists in my stuff, and learned they have been deprecated... in favor of vertex buffer objects... which kind of makes sense, since you can create vertex buffers in threads asynchronous to the display, whereas glCallLists have to be built on the GL thread context...

Keep in mind that these APIs were designed mainly for traditional games made mostly of big mesh objects.

So, VBOs work better with the meshes of traditional games than with voxel rendering (at least with a simple approach).

And display lists have the advantage of better compatibility: they run nearly everywhere.
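
For readers who never used them: a display list is compiled once on the thread that owns the GL context and then replayed with a single call each frame. A minimal sketch (not Blackvoxel's actual code):
Code:
#include <GL/gl.h>

// Compile the geometry of one sector into a display list (GL thread only).
GLuint CompileSectorList()
{
  GLuint List = glGenLists(1);
  glNewList(List, GL_COMPILE);
  glBegin(GL_QUADS);
  // ... glTexCoord2f / glNormal3f / glVertex3f for every visible face ...
  glEnd();
  glEndList();
  return List;
}

// Each frame:  glCallList(List);   (or glCallLists() for a whole batch)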

Yes, they're deprecated, like all the old OpenGL stuff. Not a problem in practice.

That said, we think it is possible to do better and faster rendering using some advanced OpenGL techniques.

It's simply a matter of time and priorities.

Quote
so shaders and display buffer building as another pluggable renderer :)

In Blackvoxel, shaders weren't used because the "old computer" rendering style we wanted simply didn't need them.

But as we made the textures for the game, we also realized that it could be nice to have some effects like bump mapping.

And other kinds of lighting effects could produce some interesting and original results.

So, that's an open avenue for the future.

Quote
I find that the reasonable relation would be building buffers per shader per image source... or per block type; with shaders, I'd think the 'default' blocks could be handled by a computational shader based on distance from the poly edge and a color scalar...

and those built per voxel sector ....

A "simple" VBO approach with voxels has a big drawback: too many draw calls.

The challenge for really gaining performance with VBOs is to reduce the number of draw calls.

VBOs were obviously designed to render big static meshes and weren't designed with voxels in mind.

But some OpenGL techniques could be used to reduce the number of draw calls. The problem will be ensuring that the implementation works on most machines.
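
One possible direction, as a sketch only (SectorRange and DrawVisibleSectors are made-up names): pack the geometry of all visible sectors into one big VBO and issue a single glMultiDrawArrays call instead of one glDrawArrays per sector.
Code:
#include <vector>
#include <GL/glew.h>

// All visible sectors share one big VBO; each SectorRange records where a
// sector's vertices live inside it.
struct SectorRange { GLint First; GLsizei Count; };

void DrawVisibleSectors(GLuint BigVbo, const std::vector<SectorRange> & Visible)
{
  std::vector<GLint>   Firsts;
  std::vector<GLsizei> Counts;
  for (const SectorRange & R : Visible)
  {
    Firsts.push_back(R.First);
    Counts.push_back(R.Count);
  }

  glBindBuffer(GL_ARRAY_BUFFER, BigVbo);
  glEnableClientState(GL_VERTEX_ARRAY);
  glVertexPointer(3, GL_FLOAT, 0, (void*)0);   // fixed-function vertex layout

  // One call instead of Visible.size() separate draw calls.
  glMultiDrawArrays(GL_QUADS, Firsts.data(), Counts.data(),
                    (GLsizei)Visible.size());

  glDisableClientState(GL_VERTEX_ARRAY);
}

glMultiDrawArrays has been core since OpenGL 1.4, so a fallback to plain glDrawArrays would only be needed on very old drivers.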

So there are ways to explore.

We must keep in mind that a new renderer is only worth making if there is a real gain.

Quote
there are also no multi-indexed buffers... for normals in this world, I'd only need a buffer of 6 entries if I could have independent index buffers for texture, vertex, and normal... but indexed drawing uses the same index for vertex/texture/normal, so the buffers all have to be the same length...

http://stackoverflow.com/questions/11148567/rendering-meshes-with-multiple-indices for instance indicates it can be done, but it's more work than it's worth.

Yes, as voxels are a really particular case, it's never simple to figure out what will be good or bad without testing.

It can become "funny" when you find that some techniques work well with some GPUs and less well with others. And it can also depend on the operating system.

As an example, the current renderer is faster with nVidia and Intel than with ATI/AMD, for comparable "GPU power".

The Blackvoxel Team