The key is to find and be the leader of the next generation: something radically different and better. Maybe that's EQN, or maybe it's something else, but everything points to voxel worlds, and what they make possible, ushering in the next generation of MMOs.
Is there any reason to use voxels other than not knowing what a manifold is? That's a serious question, not a rhetorical one.
I've honestly been wondering whether there was some kind of coding or other tech breakthrough that has made it so attractive lately to developers, both indie and full studio.
Oh, and I have no idea what a manifold is, except that I've seen it mentioned in math and physics texts that I have tried to read on my own ... obviously with little success.
Geometry shaders (introduced in DirectX 10 and then OpenGL 3.2) make a huge difference, as they let you do a ton of computations GPU-side.
And how do you destroy an object into a thousand pieces, or even just two pieces, with manifolds? I guess you're talking about 2-manifolds. I don't see how one could replace the other, or even how they're in the same category. It's as if you're talking about vegetables and I'm talking about fruits.
And will voxels be the future? I don't know; a lot of people thought after Comanche (1992) that voxels would be used a lot more, and that wasn't the case. Now Carmack's new id Tech 6 engine will reportedly be based on voxels. I do see potential for voxels, since you can manipulate a 3D world more easily when it is built from voxels instead of meshes. But 3D cards have to follow that trend.
If you want an object to shatter, have pieces fly off in every direction, and then disappear, that's pretty easy if you're using DirectX 10 or OpenGL 3.2 or later, even if you're trying to patch it onto existing techniques. Pass a uniform with the time (in milliseconds) since the object shattered so that pieces know how far to move, then in geometry shaders, pick a random orientation, rotation speed, and velocity for each triangle. Instead of passing the vertex coordinates that enter the geometry shader through unchanged, move and rotate the whole triangle by whatever amount that triangle needs to be moved. You'll probably want to recompute the normal vector after moving the triangle, and use flat interpolation rather than smooth for the normal vector.
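Here's a rough sketch of what that geometry shader could look like. The names and constants are my own, not from the post, and for brevity each triangle spins about a single fixed axis rather than a random orientation, with the "random" numbers coming from a cheap hash.

#version 150
// Sketch of the shatter idea: each triangle drifts outward along a
// pseudo-random direction and spins, scaled by the time since the shatter.

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform mat4  mvpMatrix;     // model-view-projection matrix
uniform float shatterTime;   // milliseconds since the shatter started

in vec3 vsPosition[];        // object-space positions from the vertex shader
flat out vec3 gsNormal;      // flat-shaded normal, recomputed after moving

// Cheap per-triangle pseudo-random number.
float hash(vec3 p) {
    return fract(sin(dot(p, vec3(12.9898, 78.233, 45.164))) * 43758.5453);
}

void main() {
    vec3  center = (vsPosition[0] + vsPosition[1] + vsPosition[2]) / 3.0;
    float h      = hash(center);

    // Per-triangle direction of flight and spin angle.
    vec3 dir = normalize(vec3(hash(center) - 0.5,
                              hash(center + 1.0) - 0.5,
                              hash(center + 2.0) - 0.5));
    float angle = 0.002 * shatterTime * (0.5 + h);
    float c = cos(angle), s = sin(angle);
    mat3 spin = mat3(c, 0.0, -s,   0.0, 1.0, 0.0,   s, 0.0, c);  // y-axis rotation

    // Move and rotate the whole triangle, then recompute its normal.
    vec3 moved[3];
    for (int i = 0; i < 3; ++i) {
        moved[i] = spin * (vsPosition[i] - center) + center
                 + dir * 0.001 * shatterTime;
    }
    vec3 n = normalize(cross(moved[1] - moved[0], moved[2] - moved[0]));

    for (int i = 0; i < 3; ++i) {
        gsNormal    = n;   // set before each EmitVertex(); outputs reset afterwards
        gl_Position = mvpMatrix * vec4(moved[i], 1.0);
        EmitVertex();
    }
    EndPrimitive();
}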
And if you're already extensively using tessellation (DirectX 11 or OpenGL 4.0 or later), then it's even easier. Rather than being stuck with your base vertex data, you can decide how many triangles you want to shatter the object into, set the tessellation degree accordingly, and then proceed as above. There will be a bit of a performance hit GPU-side to shatter the object, but it might be in the range of a 20% hit to your frame rate if you wanted everything in the entire game world to shatter simultaneously. For just one or two things at a time, it's not a big deal. CPU-side, the extra computations to do this amount to a rounding error.
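A sketch of the tessellation-based variant, again with my own names ("shardLevel" is a made-up uniform): pick how finely to shatter the patch by setting the tessellation levels, then let the later stages fling the generated triangles around as in the geometry shader above.

#version 400
// Tessellation control shader: more tessellation = more, smaller shards.

layout(vertices = 3) out;

uniform float shardLevel;  // higher value = more pieces

void main() {
    // Pass the control points through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    if (gl_InvocationID == 0) {
        gl_TessLevelOuter[0] = shardLevel;
        gl_TessLevelOuter[1] = shardLevel;
        gl_TessLevelOuter[2] = shardLevel;
        gl_TessLevelInner[0] = shardLevel;
    }
}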
Now, if you want the shards of the exploding object to have some physics attached (e.g., land on the ground and then stop, bounce off of walls, or bounce off of each other), then you're probably going to have to do a ton of work CPU-side, and that's going to be a problem regardless of your rendering method. I could see how voxels would mitigate the problem some there, but it's going to take an awful lot of explosions (e.g., bullet hell quantities) for that to overcome the disadvantages of using voxels everywhere else. Even if this is a huge priority, it might well be easier to convert your model to voxels when it explodes and proceed from there, without using voxels for anything other than explosions.
Here's a quick screenshot of something I posted several months ago that is kind of related, but not exactly what you're talking about.
The total amount of vertex data for all of the flames there consists of a single float, 0.0f. If you want to have such particle effects of varying sizes, colors, and numbers of particles all over the game world, they can still all share exactly the same base vertex data and pass in the other few parameters as a uniform. And then the program is run exactly once to draw all of the flames in the picture. The CPU-side work amounts to rotating the whole stack of flames once, having to tell the GPU the system time of the start of the frame once per frame (to know how far to move each flame piece), and passing some uniforms to the GPU once again every frame. The CPU does that once, treating the flames as a single solid object, not separately for each of the 2550 flame pieces.
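The post doesn't say exactly how the single-float trick is implemented; one plausible way (my guess, with made-up names) is instanced rendering, where the vertex shader derives each flame piece's placement from gl_InstanceID plus a few uniforms. The post builds triangular pieces in geometry shaders; this sketch just emits points to keep it short.

#version 150
// One vertex of data, drawn instanced once per flame piece.

in float baseVertex;        // the single 0.0f of base vertex data

uniform mat4  mvpMatrix;
uniform float frameTimeMs;  // system time at the start of the frame
uniform float radius;       // radius of the cylinder of flames
uniform float maxHeight;    // tallest height any flame piece reaches

out float flameFade;        // 1 near the base, 0 where the piece vanishes

float hash(float n) {
    return fract(sin(n * 12.9898) * 43758.5453);
}

void main() {
    float id    = float(gl_InstanceID);
    float angle = 6.2831853 * hash(id);
    float speed = 0.2 + 0.8 * hash(id + 0.5);               // some rise faster
    float top   = maxHeight * (0.3 + 0.7 * hash(id + 1.5)); // some rise higher

    // Each piece loops from the base up to its own top height, then restarts.
    float height = mod(0.001 * frameTimeMs * speed, top);
    flameFade    = 1.0 - height / top;

    vec3 pos = vec3(radius * cos(angle), baseVertex + height, radius * sin(angle));
    gl_Position  = mvpMatrix * vec4(pos, 1.0);
    gl_PointSize = 4.0;
}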
The video card decides where the flames move, not the CPU. Each flame rises up to a certain point before it disappears. Some rise much further than others before disappearing. In this screenshot, the flames form a cylinder of sorts, but I've also implemented doing the same thing with a cone.
And there's likely a negative performance hit to doing this: the game will run faster with the flames than it would without them. The reason is that the fragment shader is very, very simple (take the color received from interpolation at the rasterization stage and pass it along), so it carries virtually no performance hit, but the flames will occlude more distant objects so that more expensive fragment shaders sometimes don't have to run at all.
While I use triangular pieces, it would be pretty easy to have geometry shaders construct tetrahedra or cubes or whatever else you want instead.
First of all, I am more interested in lasting effects. Like cutting a tree at any random spot, where you can see the cut and then leave it, or cutting all the way through it so the tree actually falls and stays there. Or, more generally, lasting effects from all kinds of terraforming. That is possible with voxels. Is it possible in a big persistent world? Is it possible at a very high level of detail (small voxels)? Most probably not, or only in a very limited way. I will not argue about whether performance is an issue, because it is.
And I will not argue against mixed techniques, converting between meshes and voxels in whichever direction fits best, being, as of now, a good way to avoid some performance issues. And there is no doubt that you have a lot of other options for temporary effects, especially because you don't need every object to break apart at any random spot; it is usually enough that it breaks in one predefined way, which is a lot easier and has lower performance costs.
But nevertheless, I foresee that voxels, and all the buzz around what you can do with them, will be used a lot more in the future. As I already said, Carmack's id Tech 6 engine will be based on voxels, and with a general move toward voxels, 3D hardware may add more support for the things voxels need in order to speed them up. Anyway, it is more of a gut feeling backed by some indications, like EQN, like Carmack's id Tech 6, and whatever comes next. But the point stands: you can do a few things a lot more easily and elegantly with voxels than you can with meshes. The counter-argument is true as well: some things are easier and more elegant to do with other methods.
Graphics are nice and dandy, but the biggest problem of most MMOs is gameplay and content.
The best comparison isn't found in MMOs, imho, but in FPS games. Borderlands sets an example here: graphics are not the most crucial factor for a game; gameplay and content are.
If all you want is for a tree to have two states: chopped down or not chopped down, then that could have been done 10 years ago.
If you want a highly detailed world terraformed in fine detail by players, then are you thinking of a single-player game or an MMORPG? If the latter, then the real barrier is transmitting the game world data, not displaying it. A typical video card can do on the order of one trillion floating-point computations per second. A typical MMORPG only sends data over the Internet at a rate of about one thousand floats per second (one thousand 32-bit floats for 32 kbps).
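Spelling out that last figure: 1,000 floats per second times 32 bits per float is 32,000 bits per second, i.e., 32 kbps. Against a GPU doing on the order of 10^12 floating-point operations per second, that's roughly a nine-order-of-magnitude gap between what the card can compute and what the connection can deliver.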
In order to make that viable, what you're really looking for is procedurally-generated graphics, not voxels, as what's needed is the ability to draw massive amounts of stuff from small amounts of data. The way that MMORPGs traditionally get around this problem is that you download the entire game world before you play, and then rendering the game world as you play is done almost entirely client side, with only the need to transmit the location of players and mobs, not terrain.
Procedurally-generated graphics are completely independent of voxels, as you can easily have one without the other. They're both thinking-outside-the-box, reinvent-the-wheel sorts of things. But I don't think that voxels are what you're really after in themselves. And if you want procedurally-generated graphics, then tessellation is a huge deal, as it basically amounts to procedurally-generated vertex data.
Let's not forget that the reason that rasterization became ubiquitous and voxels, ray-tracing, quadratic surfaces, and the various other alternatives did not is that rasterization is fast and the others weren't. That hasn't really changed, though if you wanted to base a game on quadratic surfaces, you could do so without much of a performance hit by using tessellation along with your rasterization. Of course, if you're using tessellation, you might as well draw a lot of other surfaces, too, not just quadratic ones.
Voxels have perhaps gotten easier to implement, due mainly to geometry shaders and, to a lesser degree, tessellation. But those features can do so many other things besides voxels that I don't see them closing voxels' performance gap with traditional rasterization.
Unless you want to go text-based like Dwarf Fortress, graphics are often a barrier to implementing larger quantities of content. If you could get the same quality graphics with half as much effort, that would free up resources to create a lot more content.
Can we stop making excuses for EQ Next, please? It is certainly true that content suffers the better a game looks graphically. The devs can't do two things at once, so better graphics equals less time for something else.
But the idea that we need cartoony graphics so that we can have animated models is just silly. And yes, if a game does an amazing job on voice-overs, I imagine there would be less money for everything else.
We are getting nothing from having cartoony models apart from giving the devs more time and money to do something else. They then decide what to do with that time: voice-overs, animations, quests, mini-games, etc. Cartoony does not equal more animation.
Interesting. I had assumed there had been some shift in either the hardware or the coding that made both procedurally generated worlds and voxels more of a possibility in MMO creation.
"There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing play." Finite and Infinite Games, James Carse
As hardware has gotten faster and more versatile, some things that used to be computationally infeasible have become practical to implement. But practical doesn't necessarily mean sensible. If you think A and B look just as good, but A runs five times as fast as B, then you'll implement A. Ten years ago, that might have meant that you could make a working game based on A, while B simply wasn't playable. Today, maybe you can make B playable if you really want to, but if A is still five times as fast, it's still the one you're going to implement.
A manifold is basically something that locally looks like Euclidean space (more precisely, each neighborhood in the manifold is like a piece of Euclidean space).
Euclidean space is, intuitively (not the precise definition), a set of points with a distance measure defined by the usual geometric assumptions: points are in one-to-one correspondence with R^N (R being the set of real numbers), and the distance between two points is computed from the real numbers assigned to them, i.e., their coordinates.
A manifold can have additional structure. For example, a 1-D Euclidean space is an (infinite) line, but a 1-D manifold can be a circle.
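To make that circle example concrete (my notation, in the same spirit as the R^N above): the unit circle S^1 = { (x, y) in R^2 : x^2 + y^2 = 1 } is a 1-D manifold. Near any point, theta -> (cos theta, sin theta) matches a small arc of the circle one-to-one with an open interval of the real line, which is exactly the "each neighborhood looks like Euclidean space" condition, even though the circle as a whole is not a line.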
Despite my inability to crunch numbers, I find the conceptual side of this pretty interesting.
Is there by chance a manifold engine?
"There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing play." Finite and Infinite Games, James Carse
Fluid and natural character and NPC animation is CRUCIAL for an MMORPG in my opinion.
WoW has nailed it, and even people who are inclined towards more realistic graphics can easily overlook cartoonish aesthetics and have fun in WoW because everything plays so damn smooth.
Some of the animations shown in the EQN reveal appear quite smooth, but for me to like them they need to be more natural, ESPECIALLY the jumping animation (which is so terrible in, for example, Age of Conan that it makes me not want to play).
But if I have to choose between The Division and EQN, that is, super-realistic graphics with perfectly smooth and natural animation versus a cartoonish game with perfect animation? No contest, The Division wins.
What I think is the geometrically intuitive way to use tessellation is this:
1) Start by picking a manifold with boundary that you want to draw. It should not have sharp corners, and needs to be something for which you can readily specify a normal bundle. The normal bundle is really the trickiest part. Or maybe I'm just not very good at vector bundles. (Or both.)
2) Pick a triangulation of your manifold with boundary. For technical reasons, it is desirable that every facet of the triangulation should have an interior vertex, but that the triangulation should otherwise have few vertices. This triangulation is a simplicial manifold with boundary and is your base vertex data.
3) Come up with a reasonable formula to compute a tessellation degree at each vertex. This will depend on the size and curvature of the object, its distance from the camera, and how spread out your vertices are. Doing this computation is the bulk of the work that you do in vertex shaders.
4) In tessellation control shaders (equivalently, hull shaders if you're using DirectX), set the outer tessellation degrees for each patch to something that depends symmetrically on only the two endpoints, and set the inner tessellation degrees to anything reasonable. The former ensures that you can connect several surfaces later along common edges without having ugly gaps.
5) In tessellation evaluation shaders (equivalently, domain shaders if you're using DirectX), give an explicit homeomorphism between your base vertex data and the manifold with boundary that you actually wanted to draw (a sketch of this step follows the list). Also give the normal bundle, and compute whatever else you're going to need in fragment shaders.
6) Proceed with geometry shaders and fragment shaders (equivalently, pixel shaders if you're using DirectX) just as you would if you weren't using tessellation in the first place.
7) Repeat for any other relatively simple shapes that you want to draw. You'll need separate vertex and tessellation evaluation shaders for most of the different types of shapes that you want to draw, which greatly bloats the shader count as compared to older methods. For example, I've read that the Hero Engine has 60 shaders in total. A project that I'm working on has more than that for the OpenGL 4.2 version alone, even not counting the OpenGL 3.2 version for backwards compatibility.
8) If you want to draw more complex objects (not a manifold, sharp corners, too weird to do nicely in a single pass, etc.), then break it down into multiple smaller shapes that you can draw nicely and draw each of them.
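Here is a minimal sketch of step 5 for one concrete surface, a piece of a sphere, using the quad tessellation domain for simplicity. The uniforms and variable names are mine, not from the post; a real surface would plug in whatever explicit parametrization (homeomorphism) was chosen in step 1, along with its normal bundle.

#version 400
// Tessellation evaluation shader: map the abstract patch onto a sphere.

layout(quads, equal_spacing, ccw) in;

uniform mat4  mvpMatrix;
uniform vec3  sphereCenter;
uniform float sphereRadius;

out vec3 tePosition;  // surface position, for later stages
out vec3 teNormal;    // for a sphere, the normal bundle is just the radial direction

void main() {
    // gl_TessCoord.xy runs over the unit square; interpret it as two angles.
    float theta = gl_TessCoord.x * 3.14159265;         // 0..pi
    float phi   = gl_TessCoord.y * 2.0 * 3.14159265;   // 0..2*pi

    vec3 unit = vec3(sin(theta) * cos(phi),
                     cos(theta),
                     sin(theta) * sin(phi));

    tePosition  = sphereCenter + sphereRadius * unit;
    teNormal    = unit;
    gl_Position = mvpMatrix * vec4(tePosition, 1.0);
}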
At the moment, I'm working on animating characters. Due to a combination of extensive use of tessellation (which means vertex data takes little space), procedurally generated animations (which means that animating a character doesn't take much more space than if the character stood completely still), and procedurally generated textures (the really big one, which means that textures take virtually no space at all), I expect the full data for an animated character to take on the order of a few KB.
That's little enough that you could give players incredible flexibility in designing their characters, and then stream them to other players on the fly. Traditional character creators give you a handful of pre-created options and maybe some sliders and let you choose among them. I want to give players vastly more flexibility than that. How many legs do you want your character to have, and where do you want them placed? How many arms? Wings? Tail(s)? Antennae? Horns, or anything else you can think up? Do you want to run around on four legs but stand up on two when fighting? Or maybe you'd rather roll around on wheels. Or slither. Or fly (but have to stay near the ground for play balance reasons).
And then if a bunch of players create thousands of wildly different species for their own characters, why not appropriate that to have thousands of different species of NPCs and/or mobs in the game world? Sometimes quantity has a quality all its own.
Procedurally generated textures are really the hard part, and there may or may not be more than a handful of people who could do them decently.
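As a toy illustration of why procedural textures take almost no storage: the color is computed from surface position instead of being sampled from an image, so the only data shipped is a few uniforms. This is my own made-up example, nowhere near production quality, and it assumes tePosition/teNormal arrive from earlier stages (such as the tessellation sketch above) with no geometry shader in between, or one that passes them through.

#version 150
// Fragment shader computing a striped, lightly noised pattern procedurally.

in vec3 tePosition;
in vec3 teNormal;
out vec4 fragColor;

uniform vec3  baseColor;
uniform vec3  stripeColor;
uniform float stripeFrequency;

// Cheap hash; real procedural textures use much better noise than this.
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

void main() {
    // Wobbly stripes driven by height plus a little pseudo-noise.
    float stripes = 0.5 + 0.5 * sin(stripeFrequency * tePosition.y
                                    + 2.0 * hash(tePosition.xz));
    vec3 color = mix(baseColor, stripeColor, stripes);

    // Trivial directional lighting so the surface still reads as 3D.
    float light = max(dot(normalize(teNormal), normalize(vec3(0.4, 1.0, 0.3))), 0.1);
    fragColor = vec4(light * color, 1.0);
}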
The geometry side of things is easier. A lot of mathematicians could do the geometry side of things as well as I could or better. People with a weaker math background, however, such as a BS in computer science, would be completely lost in trying to implement what I described above. That's why I think it's strange that the game industry has shown no real interest in hiring the mathematicians they'd need to do modern graphics, but prefers to stick to the way things were done 6-8 years ago, apart from higher resolution textures and higher vertex counts.
"There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing play." Finite and Infinite Games, James Carse
despite my inability to crunch numbers i find the conceptual side of this pretty interesting.
is there by chance a manifold engine?
What I think is the geometrically intuitive way to use tessellation is this:
1) Start by picking a manifold with boundary that you want to draw. It should not have sharp corners, and needs to be something for which you can readily specify a normal bundle. The normal bundle is really the trickiest part. Or maybe I'm just not very good at vector bundles. (Or both.)
2) Pick a triangulation of your manifold with boundary. For technical reasons, it is desirable that every facet of the triangulation should have an interior vertex, but that the triangulation should otherwise have few vertices. This triangulation is a simplicial manifold with boundary and is your base vertex data.
3) Come up with a reasonable formula to compute a tessellation degree at each vertex. This will depend on the size and curvature of the object, its distance from the camera, and how spread out your vertices are. Doing this computation is the bulk of the work that you do in vertex shaders.
4) In tessellation control shaders (equivalently, hull shaders if you're using DirectX), set the outer tessellation degrees for each patch to something that depends symmetrically on only the two endpoints, and set the inner tessellation degrees to anything reasonable. The former ensures that you can connect several surfaces later along common edges without having ugly gaps.
5) In tessellation evaluation shaders (equivalently, domain shaders if you're using DirectX), give an explicit homeomorphism between your base vertex data and the manifold with boundary that you actually wanted to draw. Also give the normal bundle, and compute whatever else you're going to need in fragment shaders.
6) Proceed with geometry shaders and fragment shaders (equivalently, pixel shaders if you're using DirectX) just as you would if you weren't using tessellation in the first place.
7) Repeat for any other relatively simple shapes that you want to draw. You'll need separate vertex and tessellation evaluation shaders for most of the different types of shapes that you want to draw, which greatly bloats the shader count as compared to older methods. For example, I've read that the Hero Engine has 60 shaders in total. A project that I'm working on has more than that for the OpenGL 4.2 version alone, even not counting the OpenGL 3.2 version for backwards compatibility.
8) If you want to draw more complex objects (not a manifold, sharp corners, too weird to do nicely in a single pass, etc.), then break it down into multiple smaller shapes that you can draw nicely and draw each of them.
That will take me a while to digest, but I think I get the gist of it.
Goodgahd. That's insanely small compared to what I've seen, and extremely fascinating, especially how the size of textures plays out.
That's an interesting question, but I would imagine it has more to do with old boys' networks and the familiarity of dealing with what we're capable of understanding and what we're used to handling on a day-to-day basis. Of course, in the case of EQN, when they needed someone well versed enough in the mathematics, they turned to the creator of Voxel Farm. Also, it's only been in the last 5 years or so that I've noticed game companies occasionally bringing in consultants in sociology and psychology, and those are still "soft sciences" in many respects.
"There are at least two kinds of games. One could be called finite, the other infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing play." Finite and Infinite Games, James Carse
I prefer the realistic style over cartoonish, but gameplay comes first. Just make a good game and everything else falls into place.
Here is a concept I got from loyally playing Mabinogi since its inception: successful games could release a copy of themselves in the other graphic style. Others and I begged on Nexon's forums for years for them to remake Mabinogi with a more adult-looking version to draw in people who like that art style. They almost caved, but at the last minute they changed their minds and gave us Vindictus as the consolation prize.
Still, I don't see why good games can't try to release two graphical versions of themselves.
I gather you didn't like my answer the last time you asked this question?
(Er ... despite how that reads, that's not really aimed at you ... just my inner voice commenting to myself after reading your comment.)
What, if any, is the relation between simplex generation and manifold generation?
I was reading this at the time: http://procworld.blogspot.com/2013/07/simplex-toys-are-best.html
"There are at least two kinds of games.
One could be called finite, the other infinite.
A finite game is played for the purpose of winning,
an infinite game for the purpose of continuing play."
Finite and Infinite Games, James Carse
that will take me a while to digest, but i think i get the gist of it.
goodgahd. that's insanely small from what i've seen. and extremely fascinating, especially how the size of textures play out.
that's an interesting question, but i would imagine it has more to do with old boy's networks and the familiarity of dealing with what we're capable of understanding and what we're used to handling on a day to day basis. of course, in the case of EQN, when they needed someone well versed enough in the mathematics they turned to the creator of Voxelfarms. also, its only been in the last 5 years or so that i've noticed game companies occasionally getting consultants in Sociology and Psychology. and those are still 'soft sciences' in many respects.
"There are at least two kinds of games.
One could be called finite, the other infinite.
A finite game is played for the purpose of winning,
an infinite game for the purpose of continuing play."
Finite and Infinite Games, James Carse
I prefer the realistic style over cartoonish but gameplay comes 1st. Just make a good game and everything else falls into place.
Here is a concept and I got it from loyally playing Mabinogi since it's inception: successful games make a copy version in the other graphic style. I and others were begging on the forums of Nexon for years that they remake Mabinogi with an adultish looking version to draw in those who like that graphic art style. They almost caved but at the last minute changed their minds and gave us Vindictus as the consolation prize.
Still, I don't see why good games can't try to release two graphic versions of themselves?