Author Topic: How hard could smoothing be?  (Read 35296 times)

olive

  • Administrator
  • Full Member
  • *****
  • Posts: 149
    • View Profile
Re: How hard could smoothing be?
« Reply #15 on: October 27, 2014, 11:53:50 pm »
Quote
so... entities will always just be a single voxel then?  maybe 2 if it's a player? so no making like a large animated dragon/samurai ICE? (intrusion countermeasure electronics from Shadowrun/Cyberpunk matrix)... hmm hard to show what I mean it's just an image in mind, and a rough search returns 0 results that are even similar... something like http://superdupergc.com/?p=221  ... but I guess other models don't have to be voxel based...  kinda the inverse of Robocraft, where landscape is just a mesh, and the entities are all voxel...

Yes, you can animate large block structures in Blackvoxel :). A single MVI active voxel can animate a lot of voxels.

But, yes, that's not the same as games that have entities, whether those are vector or voxel entities.

The problem with entities is that they can't interact with the world the same way voxels do.

So, it's a deliberate design choice in Blackvoxel to replace entities with true voxels.

That's what we mean by "pure voxel paradigm".

Of course, that choice means Blackvoxel can't do what a lot of voxel games do with entities.

But it also means we can do some things other games can't.

That's what you call a choice ;).

I take this opportunity to point out that Black Ice came out a long time after Blackvoxel.

Quote
I'm not attempting to make it be something else... I'm just noting where others have already made some interesting mods.... and some of them would really be fairly easy to accomplish.

Was actually heading more towards making a micro-scale world to represent the neural networks... which would be within a robot cube for instance. ... and although it doesn't really need an orientation, it would be useful to add an offset origin to it; and might as well just add orientation while at it.

Why not create an abstraction: an "AI" robot that scans a portion of the world where you create your design and copies it internally for execution?

Quote
You keep speaking of this 'road map' but I didn't see it.... maybe I didn't look hard enough, but in the beginning I was looking for such information...

So, let's explain our vision and roadmap.

Blackvoxel started in 2010. The engine and everything around it took a big amount of time. After two years, we froze the engine capabilities, developed content and stabilized the whole thing.

Once Blackvoxel had enough content to be played, we got completely out of the alpha and beta states and entered a fully stable state. Blackvoxel has now been stable for a while.

This state doesn't mean development stopped, but rather that stability became a strong priority over new engine features. When users have issues, we do our best to fix them quickly. The more stable it is, the less time support takes away from development time.

Now, our roadmap is focused on game content, educational materials and other kinds of things around the game. Game content like new voxels, automatics, robots, zones, etc. is now the main priority of the project. Doing things differently would be very difficult.

At this time, engine evolution is not a priority: it's working very well. It's stable, it's fast. And there is now a lot to do with what it can already do. (We know there is some cleaning to do in it. We'll try to do our best over time.)

So we'll try to minimize changes on it to keep stability, but also to make the best use of our limited resources: it would be nonsense to have a wonderful engine with no game and no content.

That doesn't mean we'll make no evolutions on the engine, but rather that evolution will be planned as long-term changes, made carefully to avoid stability problems.

Of course, we can decide to change this if one day we find it's in the interest of the project.

But for now, there are a lot of things to do with what we already have.

Quote
I also played with 'Seed of Andromeda' which has a good approach for modelling full dynamic planets, also with a sub-meter voxel.... They have no smoothing, and I inquired, and they said 'it's an artistic choice we make to set ourselves apart' ... looks more to me like it's a choice that makes them like everyone else... it's more rare to have smoothing than to have the minecraft cube model.... but they do have cute explosions when voxels are removed... with persistent particles such that they can be picked up later... or you can break a dirt block and get gems out or something based on sub-portions of the material being able to be different result types.  They also didn't think neural networks would be within their scope either... even within the scope of supporting as an external mod; even though the creator's like 'allow modders to do anything they want'... *shrug* Or better to have a dynamic choice on whether a voxel is smoothed (can use a single bit in the 16 for VoxelType for instance, and still leave 32000 types of cubes) ... and iterating up the relations, smoothing per voxel-sector, smoothing per world... (landscape and constructions based on top of it being separate worlds, within the same universe).

In fact, voxels are a very old concept. We can find early kinds of voxel games back in the early '80s. Even smoothed voxels are old: the marching cubes patent was published in 1985. If you want to see a Minecraft-looking game that is more than 15 years old, TurboLode, it's here.
Voxel games were 3D evolutions of 2D tiled games. But the fashion for vector games led to them being forgotten for a long time. That wasn't the only reason: voxels are memory hungry.
But with modern hardware and its huge memory, along with vector games getting boring, it was only a question of time before voxel games came back.

We can understand why some creators prefer to use non-smoothed voxels.

Smoothed voxels can be more visually friendly in engines that are focused on this.

On the other hand, non-smoothed voxels are far better for manipulation because they are clearly distinct.

So what gets chosen in a game depends on what its creator wants to do.

There are a lot of original things to do, even with unsmoothed voxels. The concept is far less used than the vector game concepts or the 2D game "revivals" a lot of creators are using.

Quote
Could resolve collisions by using a robust physics library like Bullet, ODE, Newton, etc...

I'm afraid that with voxels, things are never as simple as gluing pieces of code together.

I mean, it can be simple to say and to imagine. But things never turn out as simple in reality as they are in theory.

Even in cases where making a proof of concept takes only days, resolving side effects can take weeks, getting things well polished can take months, and making content work well with it can take years.

Sometimes things are simple, but I'm afraid it's not the case here.

Quote
There was another article I was reading on voxel engines in general, about optimizations and culling... and like for the surface of things, doing combining of continuous areas... so basically the central zone would result in drawing one plane with repeated textures instead of X flat planes of the tops of each cube.... which then begins to lose effectiveness on the light blue height-map area with individual arbitrary spires.

This is an interesting case illustrating the feature exclusion problem we were talking about in the posts above.

In a word, this method means adding an extra step that connects polygons to make larger ones.

The optimisation reduces the polygon count and the GPU load in exchange for more CPU load before compiling the display lists.
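
(For illustration, a minimal sketch of that face-merging idea along one axis; the names and data layout below are made up for the example, not Blackvoxel's real structures.)

Code: [Select]
#include <vector>

// One merged quad covering `width` voxels along X at height y, depth z.
struct MergedQuad { int x, y, z, width; unsigned short type; };

// Merge consecutive voxels of one X row whose top face is visible and whose type
// matches, producing one quad per run instead of one quad per voxel.
void MergeTopFaceRow(const unsigned short *rowType,   // voxel type for each x
                     const bool *rowTopVisible,       // top-face culling result for each x
                     int sizeX, int y, int z,
                     std::vector<MergedQuad> &out)
{
  for (int x = 0; x < sizeX; )
  {
    if (!rowTopVisible[x]) { ++x; continue; }
    const unsigned short t = rowType[x];
    int start = x;
    while (x + 1 < sizeX && rowTopVisible[x + 1] && rowType[x + 1] == t) ++x;
    out.push_back({ start, y, z, x - start + 1, t });  // one quad for the whole run
    ++x;
  }
}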

In Blackvoxel, the GPU polygon count is rarely a problem, even on small machines. Execution of the display lists is blazing fast.

The problem is the CPU load while refreshing a display list. This method would increase that problem, and particularly in Blackvoxel, because of its increased need for frequent refreshes.

So, this method is well suited to games that don't refresh sectors too often.

Now, let's modify some parameters. Let's divide the voxel size by 4 with the same view distance (so an increased number of sectors). The polygon count will explode (a flat surface gets roughly 16 times more faces) and become a true problem.

That means an engine aiming to use small voxels needs to use this kind of method.

But that also means it would be technically difficult to combine small voxels with MVI, even if we don't count the exploding load on the MVI itself.

So, that explains why some game design choices condition other ones.

Because they are far more resource hungry than vector game engines, voxel engines will remain fragmented for a long time. So, a lot of good, different games rather than one "perfect" game.

Quote
But some things can just be simply boxed with a collision box instead of being voxel-perfect collisions between arbitrarily mixed sized voxel entities.

That could be acceptable for an entity character. But not for a spacecraft.

The Blackvoxel Team
« Last Edit: October 28, 2014, 04:02:22 am by olive »

olive

  • Administrator
  • Full Member
  • *****
  • Posts: 149
    • View Profile
Re: How hard could smoothing be?
« Reply #16 on: October 28, 2014, 12:05:07 am »
Sorry d3xOr, we have deleted one of your posts by mistake  :(.

Unfortunately the database backup was made before this post.

It remains in the quotes of our last reply.

The Blackvoxel Team

d3x0r

  • Jr. Member
  • **
  • Posts: 75
    • View Profile
Re: How hard could smoothing be?
« Reply #18 on: October 28, 2014, 04:29:00 am »
(No worries  about deleted post :) )

I dunno, from an outsider perspective, having a lot of the system developed, like sector paging, allows focusing on detailed things like smoothing.

A brain as a tiny voxel world would only be a few sectors really... and really only rendered in a certain mode...
The brainboard itself has worked for a long time, even had a virtual world ages ago; but lost a lot of code around 2000.  The idea is certainly sound... right now I have just 2d gui's for laying them out... which is even sufficient... I have a free-vector renderer even, so can dynamically script creating shapes and give them brains.... but; like you mentioned; doesn't do much good if there's no way for people to learn how it works... need tutorial levels for instance... in Blackvoxel; how to make a sequencing conveyor system for instance... saw a conveyor system setup to sequence carbon and metal into furnaces...

I dunno... I have a voxel project I started several weeks ago, but realized there was a lot of management system around that would have to be implemented... so started looking for source projects... found voxel.js which is a browser based javascript system (javascript!)... was pretty impressive how much performance they got... even a fairly pluggable system...
-------------
Me, I'm a software developer from '84 when I was 12.  I've seen a lot of 'this should be a simple idea' turn into 'that can't be implemented in a reasonable amount of time'  while I've also seen things that seemed really complex turn into less than 2 line code change.. But I'm more in the practice of implementing before fully fleshing out; if the skeleton doesn't work at all, not much use in fleshing it out.  And I remember when CPUs were all there was, and they weren't all that fast, and had 3d rendering code back then that was decent... then 3DFX came out with Glide, and things looked better, then nVidia bought them and killed Glide.  Which made me hate nVidia for a long time... I did notice that these opengl display lists are amazingly fast.  Probably going to take that and port it to my opengl 1 display layer... for opengl2 I was building vertex buffer object things, but with display lists it would record the change in textures too... had to build vertex buffers for each source texture (or texture combination)...  But; I've even hand coded assembly for optimization... although even the slowest processors today make it unnecessary for presentation/interaction purposes that I've let most of that code fall by the wayside about the time 64 bit processors came out... most compilers do clever optimizations.... but I'm well aware of the difference between accessing a pointer/reference and just having a thing there... and the additional lookups required for what might on the surface seem like a simple operation... Every line is often written with 'is there a way to make this cheaper?' in mind...  Voxel engines definitely benefit from having memory resources, and don't require as much optimization as once upon a time (64k was tough to do much, 640k seemed like infinite memory... until it wasn't... )... but anyway; I'm certainly not a noob when it comes to development...

-----
Okay So I've associated a ZVoxelCuller_Basic:ZVoxel_Culler with ZRender_Basic and a ZVoxelCuller_Smooth:ZVoxel_Culler with ZRender_Smooth...

Code: [Select]
class ZVoxelCuller
{
protected:
    ZVoxelWorld *world;
public:
    ZVoxelCuller( ZVoxelWorld *world )
    {
        this->world = world;
    }

    void SetWorld( ZVoxelWorld *world )
    {
        this->world = world;
    }

    virtual void InitFaceCullData( ZVoxelSector *sector ) = 0;

    // if not internal, then is meant to cull the outside edges of the sector
    virtual void CullSector( ZVoxelSector *sector, bool internal ) = 0;

    virtual void CullSingleVoxel( ZVoxelSector *sector, int x, int y, int z ) = 0;

    // get a single voxel's culling (copy sector); ULong value used even though some culling only uses 6 bits
    virtual ULong getFaceCulling( ZVoxelSector *Sector, int offset ) = 0;
    // set a single voxel's culling (copy sector)
    virtual void setFaceCulling( ZVoxelSector *Sector, int offset, ULong value ) = 0;
    // Load culling from a stream
    virtual bool Decompress_RLE(ZVoxelSector *Sector, void * Stream) = 0;
    // save culling to a stream
    virtual void Compress_RLE(ZVoxelSector *Sector, void * Stream) = 0;
};
The problem was: implementing this means adding a call after each place that does a 'new ZVoxelSector' to init the culling... for instance the static working sectors...
it's all rather hideous... and maybe I should drop the whole thing... but probably won't, except temporarily
Code: [Select]
ZFileSectorLoader::ZFileSectorLoader( ZGame *GameEnv )
{
  this->GameEnv = GameEnv;
  SectorCreator     = new ZWorldGenesis( GameEnv );
  ReadySectorList   = new ZSectorRingList(1024*1024);
  EjectedSectorList = new ZSectorRingList(1024*1024);
  SectorRecycling   = new ZSectorRingList(1024*1024);
  VoxelTypeManager  = 0;
  UniverseNum       = 1;
  WorkingEmptySector = new ZVoxelSector;
  GameEnv->Basic_Renderer->GetCuller()->InitFaceCullData( WorkingEmptySector ); // new: culler must init each sector
  WorkingEmptySector->Fill(0);
  WorkingFullSector = new ZVoxelSector;
  GameEnv->Basic_Renderer->GetCuller()->InitFaceCullData( WorkingFullSector );  // new: culler must init each sector
  WorkingFullSector->Fill(1);
  Thread = 0;
  ThreadContinue = false;
}

which then moves the code from ZVoxelWorld into an implementation of the virtual interface, and sets the interface per sector.... added a ZVoxelCuller *culler; next to void *FaceCulling;

----------------
Nuts and bolts out of the way...

was thinking about how to implement reasonable algorithms for culling... since corners will now need diagonal sectors to get information.

so... culling internal works
culling external should be broken into 3 parts... the face... if PartialCulling is set, process the inset face, without edges and corners.
Edge cubes should get updated when any of the other near three sectors get added... I guess only a subset of bits have to be set;
And corner cubes...

if I portion it out that way, it should simplify some iterations...

The basic culler should use a staggered cube algorithm... then only 50% of the cubes have to be checked...
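
(A rough, engine-independent sketch of that staggered idea, assuming a simple flat solid[] array and made-up names: every face separates an even-parity cell from an odd-parity one, so visiting only even cells and writing the bits for both sides of each of its 6 faces touches every interior face exactly once. Sector-boundary faces of odd cells still need the separate external pass.)

Code: [Select]
#include <cstdint>
#include <cstring>

// Face bit order used here (arbitrary): 0=+X, 1=-X, 2=+Y, 3=-Y, 4=+Z, 5=-Z.
static const int Off[6][3] = { {1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1} };
static inline int Opp(int f) { return f ^ 1; }   // +X <-> -X, +Y <-> -Y, +Z <-> -Z

// solid[] and faceBits[] are flat arrays of size sx*sy*sz; a face bit is set when
// the voxel is solid and its neighbour on that side is not (or is out of range).
void CullStaggered(const bool *solid, uint8_t *faceBits, int sx, int sy, int sz)
{
  std::memset(faceBits, 0, (size_t)sx * sy * sz);
  auto idx = [&](int x, int y, int z) { return (z * sy + y) * sx + x; };
  auto inRange = [&](int x, int y, int z)
    { return x >= 0 && y >= 0 && z >= 0 && x < sx && y < sy && z < sz; };

  for (int z = 0; z < sz; z++)
    for (int y = 0; y < sy; y++)
      for (int x = 0; x < sx; x++)
      {
        if (((x + y + z) & 1) != 0) continue;    // visit only even-parity cells
        bool a = solid[idx(x, y, z)];
        for (int f = 0; f < 6; f++)
        {
          int nx = x + Off[f][0], ny = y + Off[f][1], nz = z + Off[f][2];
          bool b = inRange(nx, ny, nz) && solid[idx(nx, ny, nz)];
          if (a && !b) faceBits[idx(x, y, z)]    |= (uint8_t)(1u << f);
          if (b && !a) faceBits[idx(nx, ny, nz)] |= (uint8_t)(1u << Opp(f));
        }
      }
}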

Other thoughts I pursued a while ago was to make like a surface interface between cubes and non-cubes... so solid things wouldn't have to be considered... kinda similar to what sorted render does... that a list of 'interesting' voxels is produced.... but similar to the face conglomerating idea; it ends up causing more work than benefit in sparsely populated voxel sectors... although a normal landscape would basically boil down to 256 interesting points per sector... for a normal terrain mesh.. a few extra points for steep hills...

just musing

olive

  • Administrator
  • Full Member
  • *****
  • Posts: 149
    • View Profile
Re: How hard could smoothing be?
« Reply #19 on: October 30, 2014, 03:15:22 am »
Quote
(No worries about deleted post :) )

I dunno, from an outsider perspective, having a lot of the system developed, like sector paging, allows focusing on detailed things like smoothing.

A brain as a tiny voxel world would only be a few sectors really... and really only rendered in a certain mode...
The brainboard itself has worked for a long time, even had a virtual world ages ago; but lost a lot of code around 2000.  The idea is certainly sound... right now I have just 2d gui's for laying them out... which is even sufficient... I have a free-vector renderer even, so can dynamically script creating shapes and give them brains.... but; like you mentioned; doesn't do much good if there's no way for people to learn how it works... need tutorial levels for instance... in Blackvoxel; how to make a sequencing conveyor system for instance... saw a conveyor system setup to sequence carbon and metal into furnaces...

This seems like a very interesting project. When I was younger, artificial intelligence fascinated me.

Yes, there is a sequencer: http://www.blackvoxel.com/view.php?node=1457

I guess maybe you need to program your own voxels?

So:

The "active voxels" behavior is programmed in ZVoxelReactor.cpp.

The sequencer code is at line 1971. But simpler voxels like water and falling rocks are easier to understand.

Don't worry about the voxel fetching code in these actual examples, which is complicated to read.
You can also use the world's GetVoxel(), GetVoxelExt() and SetVoxel_WithCullingUpdate() to fetch and set voxels.

If you want to store some data for your voxels, you need to create extensions. These are classes derived from ZVoxelExtension (see ZVoxelExtension_Sequencer.h). Pointers to extensions are stored in the OtherInfo[] table of the sectors.

Active voxels also need a voxel type class derived from ZVoxelType: see ZVoxelType_Sequencer.h.

Then, your class should be instantiated in ZVoxelTypeManager.cpp, in the function ZVoxelTypeManager::LoadVoxelTypes().

The last thing to do is to create a voxelinfo_xxx.txt file in the voxelinfo directory with BvProp_Active=1.

I don't know if my explanations are sufficient. I think a tutorial should be made on creating new voxels.
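
(To make those steps a bit more concrete, here is a rough, unverified outline of the skeleton involved; everything called "MyVoxel" below is invented for the example, and the exact constructors and virtual methods must be copied from the real ZVoxelType_Sequencer.h / ZVoxelExtension_Sequencer.h.)

Code: [Select]
// (1) Per-voxel storage: a class derived from ZVoxelExtension. A pointer to it
//     ends up in the OtherInfo[] table of the sector holding the voxel.
class ZVoxelExtension_MyVoxel : public ZVoxelExtension
{
  public:
    ULong Counter;   // whatever internal state the voxel needs
};

// (2) The voxel type: a class derived from ZVoxelType, following the pattern of
//     ZVoxelType_Sequencer.h (constructor and virtual methods not reproduced here).
class ZVoxelType_MyVoxel : public ZVoxelType
{
  // ... mirror ZVoxelType_Sequencer.h, e.g. creating the extension when needed.
};

// (3) Instantiate/register the type in ZVoxelTypeManager::LoadVoxelTypes()
//     (ZVoxelTypeManager.cpp), next to the existing voxel types.

// (4) The behavior itself goes in ZVoxelReactor.cpp, in the big switch on the
//     voxel type, using GetVoxel() / GetVoxelExt() / SetVoxel_WithCullingUpdate()
//     to read and modify the world around the active voxel.

// (5) Finally, create voxelinfo/voxelinfo_xxx.txt for the new type number,
//     containing BvProp_Active=1 so the reactor actually runs it.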

Quote
I dunno... I have a voxel project I started several weeks ago, but realized there was a lot of management system around that would have to be implemented...

Yep, that's exactly the problem. Everything around it is the true huge work.

Quote
Me, I'm a software developer from '84 when I was 12.

Me too... :) It seems we are the same age and started at the same time!

I had an Atari 600XL as my first computer.

That was a fascinating machine designed by Jay Miner, who was also the father of the Commodore Amiga.

So, we know you understand what we mean when speaking of game history ;).

Quote
I've seen a lot of 'this should be a simple idea' turn into 'that can't be implemented in a reasonable amount of time'  while I've also seen things that seemed really complex turn into less than 2 line code change.. But I'm more in the practice of implementing before fully fleshing out; if the skeleton doesn't work at all, not much use in fleshing it out.

Yep, you are right in saying that sometimes it's possible to find a clever idea to do complex things in a simple way.

For multi-grid, the idea would involve several systems. Not only rendering, but collision and game interaction. It's very unlikely we'd find fast ways for all of that, so there is a large amount of incompressible work.

That said, it's always interesting to think and talk about these things. It gives us ideas about what we could plan in the future. And some of your propositions will make their way in.

As I've said, some rework on the rendering engine is on the roadmap. And we remembered that some of the math needs rework too (because of some Z-buffer problems when going very far).

Adding some features on this occasion won't add much work. And at first glance, it won't harm performance to add a world-level matrix. And maybe we could also add a per-sector matrix.

So, even if Blackvoxel won't use it in the short term, that will be another capability for some future ideas.

Hoping some great ideas will come with time.

Of course, that won't be enough to make multi-grid effective in the game. Some parts will still be missing.

Quote
And I remember when CPUs were all there was, and they weren't all that fast, and had 3d rendering code back then that was decent... then 3DFX came out with Glide, and things looked better, then nVidia bought them and killed Glide.  Which made me hate nVidia for a long time... I did notice that these opengl display lists are amazingly fast.  Probably going to take that and port it to my opengl 1 display layer... for opengl2 I was building vertex buffer object things, but with display lists it would record the change in textures too... had to build vertex buffers for each source texture (or texture combination)...

Yep, display lists have their advantages when it comes to voxel rendering. At the beginning, we made several experiments with different approaches, including VBOs and geometry instancing. Display lists appeared to be the best compromise for what Blackvoxel needed. At least, that's what we found at the time we made the engine.
But we have a lot of ideas for better things in the future.
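
(For readers who haven't used them, this is the plain OpenGL 1.x pattern involved; the function names are made up and nothing here is Blackvoxel-specific.)

Code: [Select]
#include <GL/gl.h>

// Compile a sector's geometry once, at refresh time...
GLuint BuildSectorList()
{
  GLuint list = glGenLists(1);
  glNewList(list, GL_COMPILE);
  glBegin(GL_QUADS);
  /* ...emit the sector's visible faces here: glTexCoord2f(...); glVertex3f(...); ... */
  glEnd();
  glEndList();
  return list;
}

// ...then replaying it each frame is a single cheap call per visible sector.
void DrawSector(GLuint list) { glCallList(list); }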

Quote
But; I've even hand coded assembly for optimization... although even the slowest processors today make it unnecessary for presentation/interaction purposes that I've let most of that code fall by the wayside about the time 64 bit processors came out... most compilers do clever optimizations.... but I'm well aware of the difference between accessing a pointer/reference and just having a thing there... and the additional lookups required for what might on the surface seem like a simple operation... Every line is often written with 'is there a way to make this cheaper?' in mind... Voxel engines definitely benefit from having memory resources, and don't require as much optimization as once upon a time (64k was tough to do much, 640k seemed like infinite memory... until it wasn't... )... but anyway; I'm certainly not a noob when it comes to development...

Yep, like other old developers, you had the chance to do assembly. That's a chance we had in that old period of time.

Modern processors have incredible power compared to what we had in the early days.

In many programs, even games, there is no need to pay great attention to optimization. In most cases, a programmer writes their program, almost never thinks about optimizing... and most of the time it will work fast enough.

When developing Blackvoxel, we quickly understood that what we wanted to do would be a very special case because of the MVI problem. Without optimizing, things were far too slow; it appeared that we had to optimize strongly to achieve the goal.

This is the reason for the mania about performance in Blackvoxel. This is also why we adopted, in this project, a specific coding style very "close to the machine". Although we know that compilers can optimize many things, staying close to the machine helps to avoid writing overly heavy things without realizing it.

We know, however, that there is a downside to this. Like we said, whatever way a programmer chooses, there will always be a price to pay...

Quote
-----
Okay So I've associated a ZVoxelCuller_Basic:ZVoxel_Culler with ZRender_Basic and a ZVoxelCuller_Smooth:ZVoxel_Culler with ZRender_Smooth...

Code: [Select]
class ZVoxelCuller
{
protected:
   ZVoxelWorld *world;
public:
    ZVoxelCuller( ZVoxelWorld *world )
    {
        this->world = world;
    }

    void SetWorld( ZVoxelWorld *world )
    {
        this->world = world;
    }

    virtual void InitFaceCullData( ZVoxelSector *sector ) = 0;

    // if not internal, then is meant to cull the outside edges of the sector
    virtual void CullSector( ZVoxelSector *sector, bool internal ) = 0;

    virtual void CullSingleVoxel( ZVoxelSector *sector, int x, int y, int z ) = 0;

    // get a single voxel's culling (copy sector); ULong value used even though some culling only uses 6 bits
    virtual ULong getFaceCulling( ZVoxelSector *Sector, int offset ) = 0;
    // set a single voxel's culling (copy sector)
    virtual void setFaceCulling( ZVoxelSector *Sector, int offset, ULong value ) = 0;
    // Load culling from a stream
    virtual bool Decompress_RLE(ZVoxelSector *Sector, void * Stream) = 0;
    // save culling to a stream
    virtual void Compress_RLE(ZVoxelSector *Sector, void * Stream) = 0;


};
which then moves the code from ZVoxelWorld into an implementation of the virtual interface, and sets the interface per sector.... added a ZVoxelCuller *culler; next to void *FaceCulling;

----------------
Nuts and bolts out of the way...

was thinking about how to implement reasonable algorithms for culling... since corners will now need diagonal sectors to get information.

so... culling internal works
culling external should be broken into 3 parts... the face... if PartialCulling is set, process the inset face, without edges and corners.
Edge cubes should get updated when any of the other near three sectors get added... I guess only a subset of bits have to be set;
And corner cubes...

Yep, that's a more complex algorithm and it needs more sectors. That's great work.

Full culling for the current unsmoothed voxels works with 7 sectors (1 + 6 adjacent).

With the need for diagonal voxel data, it will need 27 sectors (1 + 26 adjacent).
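
(A tiny worked check of those counts, nothing engine-specific:)

Code: [Select]
// Classify the 3x3x3 neighbourhood of a sector: the 6 face neighbours are enough
// for face-only culling (1 + 6 = 7 sectors); diagonals add 12 edge and 8 corner
// neighbours (1 + 26 = 27 sectors).
int face = 0, edge = 0, corner = 0;
for (int dz = -1; dz <= 1; dz++)
  for (int dy = -1; dy <= 1; dy++)
    for (int dx = -1; dx <= 1; dx++)
    {
      int steps = (dx != 0) + (dy != 0) + (dz != 0);  // how many axes we move on
      if      (steps == 1) face++;
      else if (steps == 2) edge++;
      else if (steps == 3) corner++;
    }
// Result: face == 6, edge == 12, corner == 8, so 26 neighbours plus the sector itself.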

Quote
if I portion it out that way, it should simplify some iterations...

The basic culler should use a staggered cube algorithm... then only 50% of the cubes have to be checked...

Yep, as you said, it can make a significant difference; it sounds interesting.

Quote
Other thoughts I pursued a while ago was to make like a surface interface between cubes and non-cubes... so solid things wouldn't have to be considered... kinda similar to what sorted render does... that a list of 'interesting' voxels is produced.... but similar to the face conglomerating idea; it ends up causing more work than benefit in sparsely populated voxel sectors... although a normal landscape would basically boil down to 256 interesting points per sector... for a normal terrain mesh.. a few extra points for steep hills...
just musing

I take this opportunity to mention that there are currently two different versions of the rendering system. A flag in the sector (Flag_NeedSortedRendering) activates one or the other. The sector generator decides which render type must be used for a particular sector.
The second version adds sorting by voxel type. Depending on the zone, sorted rendering is much faster... or reduces performance. The sorting algorithm is meant to be activated in areas that expose many vertical voxel type alternations to the view.
I'll modify Blackvoxel by changing this flag into an integer enum type. That way, Blackvoxel will support the possibility of having many rendering types for the same world.

The Blackvoxel Team

d3x0r

  • Jr. Member
  • **
  • Posts: 75
    • View Profile
Re: How hard could smoothing be?
« Reply #20 on: October 30, 2014, 07:39:24 am »
Thanx for your long, detailed response.
Speaking of converting to enums...
It is better to use enums than defines... like the set of FACEDRAW_ values can all be associated by putting them in an enum; I dunno; maybe it's more useful in C#... then you reference the enum type and all the values you can pick are available in a drop list.  I guess the other reason I started converting defines to enums was for the Doc-o-matic code documentation tool; then all the values would be grouped on the same documentation page.  To the compiler enums are constants, so there's no performance hit.

I do appreciate the care for optimization during development... it's apparent to me as I was learning where things were.
My only criticism is... some lines are too long :)  When doing line breaks on long lines, operators should be prefixed on the continuation line and not left on the tail of the prior line... makes reading the expression easier because you can see what the op is, in case the line is just a little too long; also makes it clear whether a line is a new line (no operator prefix) or a continuation....  This is a habit no one uses or teaches; but I've found it of great benefit since the early 2000's...

Re: multiple rendering paths... yes, I saw the multiple paths, but the difference is how the voxels are iterated, one uses a foreach(x,y,z) the other uses foreach( in sorted list) and then did the same drawing... so I broke out the triangle emitter into a separate function that both use. 
Code: [Select]
void ZRender_Smooth::EmitFaces( ZVoxelType ** VoxelTypeTable, UShort &VoxelType, UShort &prevVoxelType, ULong info
  , Long x, Long y, Long z
  , Long Sector_Display_x, Long Sector_Display_y, Long Sector_Display_z )
https://github.com/d3x0r/Blackvoxel/blob/master/src/ZRender_Smooth.cpp#L670

Guess I ended up reverting that in ZRender_Basic; so it's back to mostly original code (added culler init)

---------
My issue is... I realize that voxelSector should be a low level class, and not know things like World.. and now after each new ZVoxelSector another call to InitFaceCullData has to be made... makes it less clean to just get a sector... need like a sector factory in render or world ... one in render could use one in world.... But then; Render right now is a game(?) a world(?) ... If it was a factory from render, then smooth sectors could be requested or basic...
------
I was having issues getting the yaw/pitch/roll reverse calculation working, so I ended up making zmatrix.cpp for those so I could just recompile the one source; since basic types get included by like everything, a change meant rebuilding everything; need to get rid of the .cpp and make the yaw/pitch/roll inlines.  *done*  still some fixups to do in planePhysics I think...
-----
Oh, was playing Black Ice, and like most FPS they have a pitch lock at +90 degrees... was playing in modified Blackvoxel for so long, expected the camera to auto rotate as the angle went over my head.  I'm not 100% sure how this works... and I can't seem to get to exactly 90 degrees up or down, and I'm not sure what would happen if you did get there... since I'm always just a hair to the left or right of exactly down, it results in a roll that, undoing it, results in turning you around in yaw... it's really kinda natural... anyway the snap to 'up' annoyed me yet again... it's not like that actually saves you anything, since you later have to compute sin/cos values to apply motion instead of just having the axis array (matrix) available to use....
I thought quaternions would be way complex; but their usage is really to and from storage, and is a way to smoothly translate one rotation matrix to another through a simple linear interpolation of the quaternion (which is just a vec4) that can be read/written to the 3x3 rotation matrix.  (matrix is 4x4 cause they have a translation/origin row)... would be nice if d3d and opengl used sane matrices... laid out so the X/Y/Z axis normal-vectors are lined up as an array of 3 values so you get the vectors immediately without applying a row-pitch offset thing...
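
(A minimal, engine-independent sketch of that round trip, with names made up for the example: a unit quaternion written back into a row-major 3x3 rotation matrix, plus a normalized lerp between two orientations.)

Code: [Select]
#include <cmath>

struct Quat { float x, y, z, w; };

// Write q (assumed unit length) into a row-major 3x3 rotation matrix m[9].
void QuatToMatrix3(const Quat &q, float m[9])
{
  float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
  float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
  float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;
  m[0] = 1 - 2*(yy + zz); m[1] = 2*(xy - wz);     m[2] = 2*(xz + wy);
  m[3] = 2*(xy + wz);     m[4] = 1 - 2*(xx + zz); m[5] = 2*(yz - wx);
  m[6] = 2*(xz - wy);     m[7] = 2*(yz + wx);     m[8] = 1 - 2*(xx + yy);
}

// Simple normalized lerp between two unit quaternions (fine for small steps,
// e.g. a follow camera easing from one orientation to another).
Quat NLerp(Quat a, Quat b, float t)
{
  // Take the short way around: flip b if the two are on opposite hemispheres.
  float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
  float s = (dot < 0.0f) ? -1.0f : 1.0f;
  Quat r = { a.x + (s*b.x - a.x)*t, a.y + (s*b.y - a.y)*t,
             a.z + (s*b.z - a.z)*t, a.w + (s*b.w - a.w)*t };
  float len = std::sqrt(r.x*r.x + r.y*r.y + r.z*r.z + r.w*r.w);
  r.x /= len; r.y /= len; r.z /= len; r.w /= len;
  return r;
}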
------------
Quote
Yep, that's a more complex algorithm and it needs more sectors. That's great work.

Full culling for the current unsmoothed voxels works with 7 sectors (1 + 6 adjacent).

With the need for diagonal voxel data, it will need 27 sectors (1 + 26 adjacent).
but... marching cubes is the 8 bits... which is the 8 corners...  I guess
and I don't need what's diagonal... so the 8 corner cubes from the center... because those don't affect the shape of this or any of the near cubes... so only 18 (6 original, 12 in the 6 ordinals from each of the original 6 (accessible by 2 names, FRONT_HAS_ABOVE and ABOVE_HAS_FRONT is the same cube))..... but since I don't use the diagonals in the center, then I don't need diagonals from the first mates either... but then I do use 18 bits instead of 8

----------------------------


Glad to meet you :)  don't actually have many peers :)
can I add you on facebook maybe?  I'd share my name, but it's not like John Smith... it would immediately indicate an individual so it's like posting my SSN# here :)
Uhmm... I had a TI-99/4a with speech synthesizer (why don't I have a simple speech synthesizer today?)  then several years later a Tandy 1000 (no letters after, original - PC jr clone) and turbo pascal
« Last Edit: October 30, 2014, 07:51:27 am by d3x0r »

olive

  • Administrator
  • Full Member
  • *****
  • Posts: 149
    • View Profile
Re: How hard could smoothing be?
« Reply #21 on: November 01, 2014, 01:20:42 am »
Quote
Thanx for your long, detailed response.
Speaking of converting to enums...
It is better to use enums than defines... like the set of FACEDRAW_ values can all be associated by putting them in an enum; I dunno; maybe it's more useful in C#... then you reference the enum type and all the values you can pick are available in a drop list.  I guess the other reason I started converting defines to enums was for the Doc-o-matic code documentation tool; then all the values would be grouped on the same documentation page.  To the compiler enums are constants, so there's no performance hit.

We used enums a lot in Blackvoxel, but also some defines ;).

Our IDE doesn't work well with auto-completion on enum-typed parameters (but it works well on most things).

We had some issues while debugging with enums that don't make them comfortable, because the debugger only shows the enum name and not the value. So, in some cases, we used defines.

Another problem with enums is the inability to specify an underlying integer type other than the default int. That's why we rarely use "enum types" with our enums. In some cases that's because we want to use a different storage size, but also to fix the integer size to avoid 32/64-bit issues. C++11 makes it possible to specify the enum storage type, but we don't want to switch to that yet.
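
(For reference, this is what that C++11 feature looks like; the FACEDRAW-style names below are just an example, not the real Blackvoxel definitions.)

Code: [Select]
#include <cstdint>

// C++11: the underlying integer type of the enum is pinned, so its storage size
// no longer depends on the compiler or on the 32/64-bit target.
enum FaceDrawFlags : uint8_t
{
  FACEDRAW_NONE  = 0,
  FACEDRAW_LEFT  = 1 << 0,
  FACEDRAW_RIGHT = 1 << 1,
  FACEDRAW_UP    = 1 << 2
};
static_assert(sizeof(FaceDrawFlags) == 1, "fixed 8-bit storage");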

About documentation, we hope we'll have time to write it one day  ;).

Quote
I do appreciate the care for optimization during development...it's apparent to me as I was learning where things were.
My only criticism is... some lines are too long :)  When doing line breaks on long lines, operators should be prefixed on the continuation line and not left on the tail of the prior line... makes reading the expression easier because you can see what the op is, in case the line is just a little too long; also makes it clear whether a line is a new line (no operator prefix) or a continuation....  This is a habit no one uses or teaches; but I've found it of great benefit since the early 2000's...

We agree with you on the interest of putting the operators first in line-break cases.

About "long lines": we don't like "over-spread" code, we find it distracting.

Rather than using a single dimension, we write code in 2 dimensions and make it more compact. That way we can view more of it at the same time.

As a priority, the most important code spreads vertically to be more readable, and the less important code spreads horizontally to avoid bloat as much as possible.

But we know every programmer has their own personal opinion on coding style and presentation. Whatever the arguments, it's always a matter of personal taste and programming style.

Quote
Re: multiple rendering paths... yes, I saw the multiple paths, but the difference is how the voxels are iterated, one uses a foreach(x,y,z) the other uses foreach( in sorted list) and then did the same drawing... so I broke out the triangle emitter into a separate function that both use.
Code: [Select]
void ZRender_Smooth::EmitFaces( ZVoxelType ** VoxelTypeTable, UShort &VoxelType, UShort &prevVoxelType, ULong info
                              , Long x, Long y, Long z
                              , Long Sector_Display_x, Long Sector_Display_y, Long Sector_Display_z )
https://github.com/d3x0r/Blackvoxel/blob/master/src/ZRender_Smooth.cpp#L670

Guess I ended up reverting that in ZRender_Basic; so it's back to mostly original code (added culler init)

The render code is considered a critical portion. So adding a function call in a loop should be avoided.

There are 16x16x64 voxels in a sector, so that makes 16384 function calls.

If the engine needs to render 10 sectors per frame at 60 frames per second, that would be:

16384 * 10 * 60 = 9830400 function calls per second.

In fact, the relative overhead must be put in perspective, because the called function is very long and has a lot of stuff in it. But it's still an overhead that could be avoided.

You could always inline the code to remove this overhead. But some compilers may also choose to ignore this based on their own criteria. Inlining is never guaranteed.

So, in the parts where you want to be sure things will be compiled the way you want, in a fully predictable way, the best is to write them the way they must be compiled, without making any suppositions about what optimizations the compiler could do.

In doing so, you'll also eliminate possible performance issues that depend on the inlining heuristic oddities of a particular compiler.

Of course, I would recommend using such methods only on critical code sections that really need maximum speed, because reducing code factorization also has bad counterparts.

Quote
---------
My issue is... I realize that voxelSector should be a low level class, and not know things like World.. and now after each new ZVoxelSector another call to InitFaceCullData has to be made... makes it less clean to just get a sector... need like a sector factory in render or world ... one in render could use one in world.... But then; Render right now is a game(?) a world(?) ... If it was a factory from render, then smooth sectors could be requested or basic...
------

Video games are very particular programs. They're the kind of thing that rarely ends up being coded in a perfectly academic way.

In a game, there are a lot of internal interactions, a lot of technical complexity, a lot of technical constraints, and permanent evolution caused by new ideas.

Doing anything requires tough decisions to balance a lot of conflicting considerations.

In most cases, even the best decision will only lead to the best compromise.

Whatever you do, it will always leave you with an impression of imperfection.

Quote
I was having issues getting the yaw/pitch/roll reverse calculation working, so I ended up making zmatrix.cpp for those so I could just recompile the one source; since basic types get included by like everything, a change meant rebuilding everything; need to get rid of the .cpp and make the yaw/pitch/roll inlines.  *done*  still some fixups to do in planePhysics I think...
-----
Oh, was playing Black Ice, and like most FPS they have a pitch lock at +90 degrees... was playing in modified Blackvoxel for so long, expected the camera to auto rotate as the angle went over my head.  I'm not 100% sure how this works... and I can't seem to get to exactly 90 degrees up or down, and I'm not sure what would happen if you did get there... since I'm always just a hair to the left or right of exactly down, it results in a roll that, undoing it, results in turning you around in yaw... it's really kinda natural... anyway the snap to 'up' annoyed me yet again... it's not like that actually saves you anything, since you later have to compute sin/cos values to apply motion instead of just having the axis array (matrix) available to use....
I thought quaternions would be way complex; but their usage is really to and from storage, and is a way to smoothly translate one rotation matrix to another through a simple linear interpolation of the quaternion (which is just a vec4) that can be read/written to the 3x3 rotation matrix.  (matrix is 4x4 cause they have a translation/origin row)... would be nice if d3d and opengl used sane matrices... laid out so the X/Y/Z axis normal-vectors are lined up as an array of 3 values so you get the vectors immediately without applying a row-pitch offset thing...
------------
but... marching cubes is the 8 bits... which is the 8 corners...  I guess
and I don't need what's diagonal... so the 8 corner cubes from the center... because those don't affect the shape of this or any of the near cubes... so only 18 (6 original, 12 in the 6 ordinals from each of the original 6 (accessible by 2 names, FRONT_HAS_ABOVE and ABOVE_HAS_FRONT is the same cube))..... but since I don't use the diagonals in the center, then I don't need diagonals from the first mates either... but then I do use 18 bits instead of 8

It's possible we'll use quaternions in the future, at least for some kind of airplane or rocket that needs to avoid gimbal lock. As you said, it has pros and cons. A lot of things can be done without it.
For the airplane, fine tuning and balancing its behavior is a long job. Tuning actually took longer than implementing the aircraft itself.

----------------------------

Quote
Glad to meet you  :)  don't actually have many peers  :)
can I add you on facebook maybe?  I'd share my name, but it's not like John Smith... it would immediately indicate an individual so it's like posting my SSN# here :)

We are also glad to meet you  :)

Yes, you can add us on Facebook. We use the Blackvoxel account, as we don't have personal Facebook accounts. (But our name and the company info are here: http://www.blackvoxel.com/view.php?node=14)

Quote
Uhmm... I had a TI-99/4a with speech synthesizer (why don't I have a simple speech synthesizer today?)  then several years later a Tandy 1000 (no letters after, original - PC jr clone) and turbo pascal

I remember the TI-99, but it wasn't very common in our country. Here, most computers were C64, Atari, Amstrad and the local manufacturer, Thomson (those came with a light pen).
At that time, we were programming in Basic and assembly language. Also doing some electronics.
The C language came a few years later with the Commodore Amiga.

The Blackvoxel Team

d3x0r

  • Jr. Member
  • **
  • Posts: 75
    • View Profile
Re: How hard could smoothing be?
« Reply #22 on: November 01, 2014, 09:40:28 am »

Quote
The render code is considered a critical portion. So adding a function call in a loop should be avoided.

There are 16x16x64 voxels in a sector, so that makes 16384 function calls.

If the engine needs to render 10 sectors per frame at 60 frames per second, that would be:

16384 * 10 * 60 = 9830400 function calls per second.
True enough.... but it's at least 16 calls that function makes itself... to emit the 4 points with 4 texture coords, setup primitive etc... and that's 1 face of 1 voxel... so figure perfect flatness... 256*16 ... so a 1 in 4096 increase in function calls is not significant :)  Edit: I got that a little wrong... If something NEEDs to be inlined... there's always defines :)   Converted some of the emitting to defines so I could have consistent texture coord references ... like point 0, face 0,1,2 from it has a texture coord defined for it....
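
(Roughly what that kind of define looks like; the name and the immediate-mode OpenGL calls below are purely illustrative.)

Code: [Select]
#include <GL/gl.h>

// The macro body is pasted at each use, so there is no call overhead, and the
// texture coordinate for a given corner is defined in exactly one place.
#define EMIT_CORNER(px, py, pz, u, v) \
  do { glTexCoord2f((u), (v)); glVertex3f((px), (py), (pz)); } while (0)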

 If a compiler can't do the job 'right' then don't use it...  LCC for instance; wicked fast, but linker cripples real usage.  Wish I could play with icc (intel) but they're so closed for some reason.

Quote
It's possible we'll use quaternions in the future, at least for some kind of airplane or rocket that needs to avoid gimbal lock. As you said, it has pros and cons. A lot of things can be done without it.
For the airplane, fine tuning and balancing its behavior is a long job. Tuning actually took longer than implementing the aircraft itself.
Well again, quaternions shouldn't be 'used' ... since to be applied for transformation it needs to convert to a 3x3 matrix... might as well just keep the matrix.  linear interpolation only counts for follow cams from arbitrary points to other arbitrary points... but mostly a follow cam will be a relative translation of an existing view camera and not really require quaternion representation either... and the 4 coordinates don't translate into something comprehensible... so expressing any constant quat-vec4's is not really possible... just to retrieve from an existing rotation 3x3 state.... well i,j,k,l vectors work; but once they get combined multiple coordinates change at once for a simple rotation.
-----------------------
But; I guess I'm really looking at the wrong scale of things...
I know you mentioned some things already... but how do I really add a new block behavior?  Maybe creation of a block should create a voxel body :)  A voxel brontosaurus or something... so a simple block spawns all of the others when created... and generate motion in voxel units... was considering a ground displacement for footprints ... so many thoughts.

The bomb in black forest uses world distance for detection and transformation into red liquid distance.... the x64 factor for reduced world size caused much too many red blocks to be created :)

Also attempted to simply add more threads to process things; wasn't as simply extendable as I would have hoped :) 
InterlockedExchange()/*windows, not sure what linux equiv is... gcc __asm...*/... "lock xchg ax, [mem] " is a cheap multiprocessor safe lock.  for things like queues and lists to add/remove items to process, a simple spin-on-lock with a Relinquish() ... sched_yield() /*linux*/ or Sleep(0); /* windows */ is sufficient and doesn't require things like scheduling.. like if( locked ) sleep( until_lock_is_unlocked ); ..

static size_t lock;  /* register sized variable */
while( locked_exchange( &lock, 1 ) ) { Relinquish(); };
/* do stuff */
lock = 0;
« Last Edit: November 01, 2014, 09:44:44 am by d3x0r »

d3x0r

  • Jr. Member
  • **
  • Posts: 75
    • View Profile
Re: How hard could smoothing be?
« Reply #23 on: November 02, 2014, 09:06:03 am »
The other example of 'poor relationships' I ran into is C# has a type called DataTable, which is a representation of a SQL table, with dynamic columns with types and rows, etc.  But DataTables contain columns and rows, and all columns know the DataTable they are in, and all rows know the DataTable; also DataTables can be contained in DataSets, which are groups of DataTables and add the foreign key relationships between tables.  So from any row you can get to any other row in any other table that row is remotely related to... so there are sometimes merits to having... say, worlds know all sectors, but sectors know their world, and hence their renderer...  or something.

olive

  • Administrator
  • Full Member
  • *****
  • Posts: 149
    • View Profile
Re: How hard could smoothing be?
« Reply #24 on: November 03, 2014, 02:26:44 am »
Quote
True enough.... but it's at least 16 calls that function makes itself... to emit the 4 points with 4 texture coords, setup primitive etc... and that's 1 face of 1 voxel... so figure perfect flatness... 256*16 ... so a 1 in 4096 increase in function calls is not significant :)  Edit: I got that a little wrong... If something NEEDs to be inlined... there's always defines :)  Converted some of the emitting to defines so I could have consistent texture coord references ... like point 0, face 0,1,2 from it has a texture coord defined for it....

There are 14 function calls per face, so 84 calls for all faces. So, roughly 1/100 of the function calls. But with a lot of parameters, so more like 1/50 of the calls. Sure, it's not huge.
The idea is more to code critical pieces of code in the spirit of doing the maximum optimisation.

Defines are often used for code snippets, but they are not very handy for large portions of code, and they also have some drawbacks.

We also studied the idea of meta programming. But there are also disadvantages.

Quote
If a compiler can't do the job 'right' then don't use it... LCC for instance; wicked fast, but linker cripples real usage.  Wish I could play with icc (intel) but they're so closed for some reason.

Doing the job "right", for a compiler, can also mean taking a lot of contradictory considerations into account, because modern CPUs are very complex.

As an example, a compiler must avoid over-inlining big pieces of code in every portion of non-critical code, because growing all the code too much can lead to bad cache usage and worse performance.

But a heavily used little piece of code is a very special case that can suffer from this general behaviour.

The problem is that even the best compiler has no real means of knowing whether a code portion is critical. Its decisions are based on "general considerations". That's optimal for general code, but might not be the best for special cases. That's why the compiler sometimes needs help.

Of course, there are a lot of other means to help the compiler in such cases. Some compilers have a lot of tuning flags. There are special non-standard attributes, like gcc's "always_inline".
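
(This is what that gcc hint looks like on a small helper; the Lerp function is just an example, and MSVC spells the same idea __forceinline.)

Code: [Select]
// The attribute asks gcc to inline the call even when its own heuristics would
// decline, which is the point for tiny helpers used in very hot loops.
static inline __attribute__((always_inline)) float Lerp(float a, float b, float t)
{
  return a + (b - a) * t;
}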

Profiling is also used to help the compiler know which pieces of code are heavily used, so it can change its rules for them.

But all these means have big drawbacks. Profiling adds a complicated phase, and special compiler flags and attributes don't work across compilers.

Here again, whatever the way, there is a price to pay.

Quote
Well again, quaternions shouldn't be 'used' ... since to be applied for transformation it needs to convert to a 3x3 matrix... might as well just keep the matrix.  linear interpolation only counts for follow cams from arbitrary points to other arbitrary points... but mostly a follow cam will be a relative translation of an existing view camera and not really require quaternion representation either... and the 4 coordinates don't translate into something comprehensible... so expressing any constant quat-vec4's is not really possible... just to retrieve from an existing rotation 3x3 state.... well i,j,k,l vectors work; but once they get combined multiple coordinates change at once for a simple rotation.

I think it's good advice. We'll certainly follow it.

Quote
But; I guess I'm really looking at the wrong scale of things...
I know you mentioned some things already... but how do I really add a new block behavior?  Maybe creation of a block should create a voxel body :)  A voxel brontosaurus or something... so a simple block spawns all of the others when created... and generate motion in voxel units... was considering a ground displacement for footprints ... so many thoughts.

Yep, that's exactly the principles of how things are working with MVI.

In Blackvoxel, a voxel can have a program defining it's behavior, that could be called a voxel-shader.

In this program, you can manipulate the surrounding voxels: animations, chemical reactions, transformations... nearly everything is possible.

Yes, there are amazing things to do with this.

Making a "block behaviour" simply means adding the behaviour code in ZVoxelReactor.cpp (in the big switch, where the case is the voxel type), then adding the BvProp_Active=1 statement in the corresponding voxelinfo file.

Like I said in a post some days ago, there are also some other classes to add if your voxel needs its own data storage for internal state, inventory or whatever you want.
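To give a rough idea, it amounts to something like this (the interface, the type numbers and the function shape are simplified for illustration, not the actual ZVoxelReactor code):

/* Simplified sketch of an active voxel behaviour. */
struct ZWorldLike
{
  virtual void SetVoxel(int x, int y, int z, unsigned short VoxelType) = 0;
};

void ProcessActiveVoxel(ZWorldLike * World, int x, int y, int z, unsigned short VoxelType)
{
  switch (VoxelType)
  {
    /* ... existing active voxel types handled here ... */

    case 300:                            /* hypothetical new "spawner" voxel */
      /* Turn the voxel above into another type; repeated every reaction
         cycle, this is enough to grow or animate a larger structure. */
      World->SetVoxel(x, y + 1, z, 301);
      break;
  }
}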

We think we'll make this simpler in the future by adding an "On_Execution" kind of function in ZVoxelType. This is less efficient, but only massively used voxels need high performance.

Quote
The bomb in the black forest uses world distance for detection and for the transformation-into-red-liquid distance.... the x64 factor for the reduced world size caused far too many red blocks to be created  :)

Also attempted to simply add more threads to process things; it wasn't as easily extendable as I would have hoped  :)
InterlockedExchange() /* windows; not sure what the linux equiv is... gcc __asm... */ ... "lock xchg ax, [mem]" is a cheap multiprocessor-safe lock.  For things like queues and lists to add/remove items to process, a simple spin-on-lock with a Relinquish() ... sched_yield() /* linux */ or Sleep(0); /* windows */ is sufficient and doesn't require things like scheduling.. like if( locked ) sleep( until_lock_is_unlocked ); ..

static size_t lock;  /* register-sized variable, shared between threads */
while( locked_exchange( &lock, 1 ) ) { Relinquish(); }  /* spin until the exchange returns 0, i.e. we acquired it */
/* do stuff under the lock */
lock = 0;  /* release so another thread's exchange can succeed */

Blackvoxel uses such concepts in the message queues used for inter-thread communication. We used slightly different instructions because we work "lockless" (while still ensuring multithread correctness).
In GCC there are "compiler intrinsics": special functions converted directly into assembly. These intrinsics are translated to the right instructions on different processors.

So we used the __sync_bool_compare_and_swap() intrinsic for our stuff.

From what I could read on the web, the InterlockedExchange() equivalent should be the __sync_lock_test_and_set() intrinsic on GCC. But that must be verified.

Be careful to add the -march=i686 flag with GCC on Windows, otherwise the intrinsics won't compile.
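For illustration, your spin lock would look something like this with the GCC builtins on Linux (a sketch only, to be verified like everything in this area; it is not code from Blackvoxel):

#include <sched.h>

static volatile long SpinLock = 0;

void Lock(void)
{
  /* __sync_lock_test_and_set() atomically writes 1 and returns the previous
     value, so only the thread that saw 0 gets through the loop. */
  while (__sync_lock_test_and_set(&SpinLock, 1))
    sched_yield();                  /* Relinquish(): give the CPU back while waiting */
}

void Unlock(void)
{
  __sync_lock_release(&SpinLock);   /* store 0 with the matching release barrier */
}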

For information, Blackvoxel was developed almost entirely on Linux. Even the Windows package is compiled on Linux :).

The Blackvoxel Team
« Last Edit: November 03, 2014, 02:33:06 am by olive »

d3x0r

  • Jr. Member
  • **
  • Posts: 75
    • View Profile
Re: How hard could smoothing be?
« Reply #25 on: November 03, 2014, 09:22:01 pm »
compare_swap_and_xchg and InterlockedExchange are similar; other than the function results...
I see; that is one thing I had an issue with when porting; think I made it thread unsafe...

Here's something I was playing with - surround-o-vision

http://youtu.be/R2izTRpP2kM 

each display is actually independent, and should build its display list appropriate for its view direction, but right now, everything is added to every display list, and 6 windows are shown each rendering pass... at very low frame rates there is frame tear between the displays....

More optimal would be to target 3 monitors, and just show forward, left and right.. a single view stretched across 3 displays is not right, and the perspective gets distorted badly.

----
So ya I'll turn my attention more to voxel based operations... I dunno just feel I'm missing something... like I saw the voxel engine

d3x0r

  • Jr. Member
  • **
  • Posts: 75
    • View Profile
Re: How hard could smoothing be?
« Reply #26 on: November 04, 2014, 12:23:39 pm »
I continued to do some merging, and ... I have gui popup displays rendering... but as soon as I do the textures ...
*and as I typed that I realized what it might have been*
So the last thing being done was drawing black text output, which left glColor set to 0,0,0.. and apparently whatever the last color before the swap is, is what the glCallList lists use... so everything seemed to be there, but it was all black... the geometry occluded my displays in the depth buffer but had no color itself... finally reset the color back to 1,1,1 before the swap and now

.. *deleted*. http://youtu.be/V9TR1w2yEtg

Doh; stupid opengl window doesn't record right.
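For reference, the fix boils down to this (the function and list id are placeholders, not my actual code):

#include <GL/gl.h>

/* Display lists compiled without their own glColor calls inherit whatever
   color is current when they are executed, so reset it before calling them. */
void DrawSector(GLuint SectorList)
{
  glColor3f(1.0f, 1.0f, 1.0f);   /* undo the 0,0,0 left over from the black text */
  glCallList(SectorList);        /* now renders textured instead of solid black */
}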
« Last Edit: November 04, 2014, 12:25:10 pm by d3x0r »

olive

  • Administrator
  • Full Member
  • *****
  • Posts: 149
    • View Profile
Re: How hard could smoothing be?
« Reply #27 on: November 05, 2014, 03:22:27 am »
The other example of 'poor relationships' I ran into: C# has a type called DataTable, which is a representation of a SQL table, with dynamic typed columns and rows, etc.  DataTables contain columns and rows, but all columns know the datatable they are in, and all rows know the datatable; datatables can also be contained in datasets, which are groups of datatables and add the foreign key relationships between tables.  So from any row you can get to any other row in any other table that row is remotely related to... so there's sometimes merit in having... say, worlds know all sectors, but sectors know their world, and hence their renderer... or something.

True. It's a good idea.

Quote
compare_swap_and_xchg and InterlockedExchange are similar; other than the function results...
I see; that is one thing I had an issue with when porting; think I made it thread unsafe...

Yes, thread-unsafe code can easily create weird instabilities. The problem with a performance-oriented program is that we try to avoid any form of lock whenever possible. The difficulty is not missing the cases where that won't work.

Quote
Here's something I was playing with - surround-o-vision

http://youtu.be/R2izTRpP2kM

each display is actually independent, and should build its display list appropriate for its view direction, but right now, everything is added to every display list, and 6 windows are shown each rendering pass... at very low frame rates there is frame tear between the displays....

It's fun. We see that it can do many things.  :)

Quote
More optimal would be to target 3 monitors, and just show forward, left and right.. a single view stretched across 3 displays is not right, and the perspective gets distorted badly.

To do it right, you would need to enter some parameters for each screen. As far as I know, only some flight simulators support that kind of advanced multi-screen setup.

Quote
So ya I'll turn my attention more to voxel based operations... I dunno just feel I'm missing something... like I saw the voxel engine

Yep, the most interesting part of Blackvoxel is on the MVI side...

That's the core of the game interest.

The rendering engine is designed mainly to be fast and efficient in order to serve MVI.  :)

The Blackvoxel team

d3x0r

  • Jr. Member
  • **
  • Posts: 75
    • View Profile
Re: How hard could smoothing be?
« Reply #28 on: November 09, 2014, 12:46:03 am »

This is a code snippet that's mostly standalone; it uses libpng to read/write images.  Half is read, half is write.
basically my image structure is
typedef unsigned char byte;   /* assuming byte is an unsigned 8-bit type */
struct image {
  int x, y;
  unsigned int w, h;          /* width and height in pixels */
  byte (*image)[4];           /* color data: 4 bytes (RGBA) per pixel */
};


https://code.google.com/p/c-system-abstraction-component-gui/source/browse/src/imglib/pngimage.c

several editors on windows support png and the alpha transparent layer... none (or few) support 32 bit alpha saving; probably you use gimp?  I have a command line utility that converts 24 to 32, but it makes the alpha channel opaque.

png is like bmp in that it is lossless... but it uses zip compression per channel...  works well for RLE-encodable images basically... or mostly constant images... but it does have 8-bit alpha channel support.

can google 'sample libpng read' which is what my code was based on... which is mostly copy and pastable....

----------
On image loading: I open the file memory-mapped and pass the pointer to that memory to a routine that tries passing it to several loader routines, which then result in an image... basically if( !ZBitmapImage.PNGLoad(file) ) if( !ZBitmapImage.BMPLoad( file ) ) ... if (!... JPGLoad() ) ...

can just pass the open file and rewind it between....
SDL has an image loading library...
FreeImage is a LARGE library that loads just about everything... but it's like 2M .. png is a few hundred K
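Roughly, the fallback chain has this shape (the loader methods here are stubs standing in for the real libpng/BMP/JPEG code, not the actual ZBitmapImage interface):

struct ZBitmapImage
{
  bool LoadPNG(const char * FileName) { return false; /* stub: real code would use libpng  */ }
  bool LoadBMP(const char * FileName) { return false; /* stub: real code would parse BMP   */ }
  bool LoadJPG(const char * FileName) { return false; /* stub: real code would use libjpeg */ }
};

/* Try each loader in turn until one recognises the file. */
bool LoadAnyImage(ZBitmapImage & Image, const char * FileName)
{
  if (Image.LoadPNG(FileName)) return true;
  if (Image.LoadBMP(FileName)) return true;
  if (Image.LoadJPG(FileName)) return true;
  return false;   /* no loader accepted the file */
}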
-----------

olive

  • Administrator
  • Full Member
  • *****
  • Posts: 149
    • View Profile
Re: How hard could smoothing be?
« Reply #29 on: November 12, 2014, 11:54:37 pm »
This is a code snippet that's mostly standalone; it uses libpng to read/write images.  Half is read, half is write.
basically my image structure is
typedef unsigned char byte;   /* assuming byte is an unsigned 8-bit type */
struct image {
  int x, y;
  unsigned int w, h;          /* width and height in pixels */
  byte (*image)[4];           /* color data: 4 bytes (RGBA) per pixel */
};

https://code.google.com/p/c-system-abstraction-component-gui/source/browse/src/imglib/pngimage.c

several editors on windows support png and the alpha transparent layer... none (or few) support 32 bit alpha saving; probably you use gimp?  I have a command line utility that converts 24 to 32, but it makes the alpha channel opaque.

png is like bmp in that it is lossless... but it uses zip compression per channel...  works well for RLE-encodable images basically... or mostly constant images... but it does have 8-bit alpha channel support.

can google 'sample libpng read' which is what my code was based on... which is mostly copy and pastable....

----------
On image loading: I open the file memory-mapped and pass the pointer to that memory to a routine that tries passing it to several loader routines, which then result in an image... basically if( !ZBitmapImage.PNGLoad(file) ) if( !ZBitmapImage.BMPLoad( file ) ) ... if (!... JPGLoad() ) ...

can just pass the open file and rewind it between....
SDL has an image loading library...
FreeImage is a LARGE library that loads just about everything... but it's like 2M .. png is a few hundred K
-----------

Gimp is what we recommend for texture work with Blackvoxel. It's a very powerful program.

But any other major image editor will do the job (and provide BMP32 support).

You are right that uncompressed BMP takes more space on disk. But is it really a problem?

At this time, the space taken by Blackvoxel on the hard disk stays low compared to many games. As the images are mostly textures, we'll run out of GPU RAM well before hard disk space becomes a problem.

And for web distribution, packages are compressed anyway.

After all, image loading isn't a central functionality in a game. It's common to have limited support for texture formats.

There are pros and cons to compressed formats. On one hand, we'd gain space; on the other, we'd add a little delay to startup time.

But a more complex format also means more compatibility problems.

And image libraries are typically the pieces of code that need evolution and maintenance over time.

Some months ago, Gimp changed to a new BMP format revision, and we had to update the image loader.

But in the end, as we wrote it ourselves, we were able to fix the problem very quickly.

So, as there are pros and cons either way, we'll have to think about that idea.

The Blackvoxel Team