Problems extracting texture coords from hkx file

Posted by jchmack

I am trying to use Havok Animation outside the demo framework. I have been trying to load a mesh from an hkx file, and I can get the positions of the vertices just fine:

	hkxVertexDescription* vdesc = VertexBuffer->getVertexDesc();
	hkUint32 usageMask = vdesc->getMask();

	if (	(usageMask & hkxVertexDescription::HKX_DU_POSITION) &&
			(usageMask & hkxVertexDescription::HKX_DU_TEXCOORD)	)
	{
		hkInt8* base = (hkInt8*)VertexBuffer->getVertexData();
		hkUint32 stride = vdesc->m_stride;

		int PosOffset = vdesc->getElementByteOffset(hkxVertexDescription::HKX_DU_POSITION, 0);
		int TexOffset = vdesc->getElementByteOffset(hkxVertexDescription::HKX_DU_TEXCOORD, 0);
		int numVerts = VertexBuffer->getNumVertices();

		for (hkInt32 i=0; i< numVerts; i++)
		{
			float* pos = reinterpret_cast<float*>(base + (stride * i) + PosOffset);
			float* tex = reinterpret_cast<float*>(base + (stride * i) + TexOffset);

			SimpleVertex NewVertex;
			NewVertex.Position.x					= pos[0];
			NewVertex.Position.y					= pos[1];
			NewVertex.Position.z					= pos[2];

			NewVertex.TextureCoordinate.x			= tex[0];
			NewVertex.TextureCoordinate.y			= tex[1];
			//NewVertex.TextureCoordinate.z			= tex[2];

			//cout << NewVertex.TextureCoordinate.x << ' ' << NewVertex.TextureCoordinate.y << endl;


			m_Vertices.Add(NewVertex);

			//hkVector4 position( pos[0], pos[1], pos[2] );
			//HK_DISPLAY_STAR(position, 5.0f, color);
		}
	}

When I render, the positions of the vertices seem fine but the texcoords are way off. I displayed the values to the console and I definitely get coords outside the [0,1] range.

What exactly am I doing wrong?

Edit: I went ahead and loaded the normals from the hkx file using the same method as the positions. I believe that for positions and normals I can use floats, but not for the texcoords.

from hkxVertexDescription.h:
// in order, 0,1,2, etc of the texture channels. Assumed to be 2D, [u,v], in most cases

What should I cast them to?

Posted by PatrickAtHavok

Hey there Jchmack

Actually, for positions, normals and texcoords you can use any of the types declared in the enum DataType code segment in hkxVertexDescription.h (around line 27-37). You have to read the data types from vdesc using vdesc->getElementType(), much the same way you use vdesc->getElementByteOffset(). That way, you can reinterpret_cast all of the data correctly. It seems you got lucky with your positions and normals and made a good guess for their type, but that doesn't work in general.

What does vdesc->getElementType(hkxVertexDescription::HKX_DU_TEXCOORD, 0) return? My guess would be that if they're not floats, they could be HKX_DT_INT16s, which you would then need to divide by 3276.7f to convert to float tcoords, as per the comment in hkxVertexDescription.h.
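
Something along these lines, just a rough sketch that only handles the float and int16 cases (check the DataType enum in hkxVertexDescription.h in your SDK for the full list), could replace the texcoord read in your vertex loop:

// Rough sketch: convert one vertex's texcoord element to floats regardless of stored type.
// Only HKX_DT_FLOAT and HKX_DT_INT16 are handled here; other DataType values would need
// their own cases. Requires the hkxVertexDescription header from the SceneData library.
static void readTexCoord( hkxVertexDescription* vdesc,
                          const hkInt8* vertexBase,   // start of this vertex, i.e. base + stride * i
                          int texOffset,
                          float& u, float& v )
{
	hkxVertexDescription::DataType texType =
		vdesc->getElementType( hkxVertexDescription::HKX_DU_TEXCOORD, 0 );

	if ( texType == hkxVertexDescription::HKX_DT_FLOAT )
	{
		const float* tex = reinterpret_cast<const float*>( vertexBase + texOffset );
		u = tex[0];
		v = tex[1];
	}
	else if ( texType == hkxVertexDescription::HKX_DT_INT16 )
	{
		const hkInt16* tex = reinterpret_cast<const hkInt16*>( vertexBase + texOffset );
		u = tex[0] / 3276.7f;   // int16 texcoords are scaled, per the comment in hkxVertexDescription.h
		v = tex[1] / 3276.7f;
	}
	else
	{
		u = 0.0f;               // type not handled in this sketch
		v = 0.0f;
	}
}

Inside your loop you would then call readTexCoord( vdesc, base + stride * i, TexOffset, NewVertex.TextureCoordinate.x, NewVertex.TextureCoordinate.y );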

Patrick Developer Support Engineer Havok www.havok.com
Posted by jchmack
Quoting - PatrickAtHavok (full post quoted above)

Thank you for the quick reply (especially at 4am)
I tried to implement what you suggested:

hkxVertexDescription::DataType TexDataType = vdesc->getElementType(hkxVertexDescription::HKX_DU_TEXCOORD, 0);


if (TexDataType == hkxVertexDescription::HKX_DT_INT16)
{
	hkInt16* tex = reinterpret_cast<hkInt16*>(base + (stride * i) + TexOffset);
	NewVertex.TextureCoordinate.x = (tex[0] / 3276.7f);
	NewVertex.TextureCoordinate.y = (tex[1] / 3276.7f);
}

And I tested that vdesc->getElementType(hkxVertexDescription::HKX_DU_TEXCOORD, 0) does return hkxVertexDescription::HKX_DT_INT16. The coordinates are better now (all between 0 and 1), but they still do not look correct when rendered...

It seems like the mesh (HavokGirl) coordinates are mixed up or something: her hands are mapped to her shoulders, among many other issues. I figured it could be a rendering issue, but I have a pretty straightforward shader:

return txDiffuse.Sample( samLinear, input.Tex );

Any other ideas on what else could be wrong?

Posted by PatrickAtHavok
Best Reply

Hey again Jchmack,

Don't worry about me posting at 4am, I'm at the Havok main office in Dublin, Ireland (GMT+0) - that post was noon for me, which puts you on the west coast of the US by my calculation...

One thing that might be going wrong with your texture mapping is that different renderers use different conventions as to which corner of a texture is (0,0) and which is (1,1). You could try all these permutations of how that could work:

NewVertex.TextureCoordinate.x = (tex[0]/3276.7f);
NewVertex.TextureCoordinate.y = (tex[1]/3276.7f);

NewVertex.TextureCoordinate.x = 1 - (tex[0]/3276.7f);
NewVertex.TextureCoordinate.y = 1 - (tex[1]/3276.7f);

NewVertex.TextureCoordinate.x = 1 - (tex[0]/3276.7f);
NewVertex.TextureCoordinate.y = (tex[1]/3276.7f);

NewVertex.TextureCoordinate.x = (tex[0]/3276.7f);
NewVertex.TextureCoordinate.y = 1 - (tex[1]/3276.7f);

If it's just a problem with the coordinate convention of the renderer, one of those should give you the right answer. If that doesn't work, we'll have to think of something else...

Patrick Developer Support Engineer Havok www.havok.com
Posted by jchmack
Quoting - PatrickAtHavok (full post quoted above)

Thank you very much, I got the textures loaded properly with the last of the permutations you gave me.
I am actually in the central USA, but I believe the Intel site time is set to the west coast.

Now I am trying to load the correct materials. But I bet I can do this much on my own.
Any advice on how to retrieve the texture map from the hkVariant in the material?

hkxMaterial* mMaterial = MeshSection->m_material;
hkxMaterial::TextureStage* mStages = mMaterial->m_stages;
hkVariant m_texture = mStages->m_texture;

Thanks again Patrick.

Posted by PatrickAtHavok

Hey again Jchmack,

The texture maps are stored either in a file or in already-loaded image data, and each case uses a different texture class. So you first have to find out which class a given texture was loaded as, then cast to that class. The hkVariant provides everything you need to do this.

Something like this code here:

if( hkString::strCmp( texture.m_class->getName(), hkxTextureFileClass.getName() ) == 0 )
{
    hkxTextureFile* textureFile = (hkxTextureFile*)( texture.m_object );
    ...
}
else if( hkString::strCmp( texture.m_class->getName(), hkxTextureInplaceClass.getName() ) == 0 )
{
    hkxTextureInplace* textureInplace = (hkxTextureInplace*)( texture.m_object );
    ...
}

One small note on your naming convention: normally we use the "m_" prefix only for member variables, so we can tell which those are at a glance; we'd call your local variable "texture" instead of "m_texture".

Patrick Developer Support Engineer Havok www.havok.com
Posted by danothom

This is obviously a really old post, but as it applies to my current issue, instead of creating a new post I thought I would just revive this one.

Basically I've been having similar issues extracting the TexCoords from Havok files. The TexCoords were all over the place.
Using the above advice (casting the texCoords to hkInt16 and then dividing by 3276.7) I've managed to massage the TexCoords back to normalised values in [0,1].

This is great and all, but I was wondering if someone could give me some background as to why the HavokGirl mesh file has 2 .PNG texture files, one for the head and one for the body. This is causing me a lot of grief.

If I load two separate texture files for this one model mesh, then how am I meant to distinguish between the two files and implement the rendering?

I've tried mapping the shader's 2D sampler to just the head PNG file for the time being; however, even though my indexed mesh of the HavokGirl vertices renders correctly, the colouring and texture of the entire rendered model comes out a stripy red colour.

Especially given the way the mesh file is broken down into AC-->MeshBinding(s)-->Mesh-->MeshSection(s)-->VertexBuffer + IndexBuffer(s), I'm just not sure how I'm meant to iterate through that structure and then use the two separate texture files!

Can any of the gurus offer any insight?

Posted by HavokTim

Hi danothom--

Characters in video games often consist of multiple submeshes, and in particular frequently have a separate submesh for the head. One reason is to allow sharing of assets - heads can be swapped out and the body shared between many different character instances. Another reason is that the head may need to be affected by multiple systems, such as a facial animation subsystem. Third, and most importantly, it's common to use different materials for various subparts of a character. Because each material requires a separate draw call, a single object is broken up into several sub-meshes based on material type. Although most of the Havok demos do not require these features, Havok is frequently integrated into triple-A engines that do. The demos include assets like this to provide a complete example for users that need to leverage Havok's full feature set.

In terms of iterating through these resources, each mesh contains one or more meshSections, and each meshSection has a unique material, a single vertex buffer (with a vertexDescription), and an array of index buffers which correspond to draw primitives (strips, fans, etc.).
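
In code, walking that hierarchy looks roughly like the sketch below. Note this is only a sketch: the array member names (m_sections, m_vertexBuffer, m_indexBuffers) are what I'd expect from the scene data classes, but double-check them against hkxMesh.h and hkxMeshSection.h in your SDK version.

// Sketch of iterating a loaded hkxMesh; array member names are assumptions,
// so verify them against the hkx scene data headers in your SDK.
void buildSubMeshes( hkxMesh* mesh )
{
	for ( int s = 0; s < mesh->m_sections.getSize(); ++s )
	{
		hkxMeshSection* section = mesh->m_sections[s];

		hkxMaterial*     material     = section->m_material;      // unique material per section
		hkxVertexBuffer* vertexBuffer = section->m_vertexBuffer;  // single vertex buffer
		hkxVertexDescription* vdesc   = vertexBuffer->getVertexDesc();

		// One index buffer per draw primitive (triangle lists, strips, fans, ...)
		for ( int b = 0; b < section->m_indexBuffers.getSize(); ++b )
		{
			hkxIndexBuffer* indexBuffer = section->m_indexBuffers[b];
			// build your engine-side submesh from (material, vdesc, vertexBuffer, indexBuffer)
		}
	}
}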

Let me know whether this fully answers your questions. Thanks!

--Tim

Posted by danothom

Hi Tim,

Thanks for the prompt response. I've already managed to extract all of the vertices, normals, materials, texcoords etc. from the model. Like I mentioned above, I'm able to render the mesh with a static drawElements call using the vertex/index data from the model.

What I'm struggling to get my head around is how to determine when I need to point the texture sampler in the shaders to the Head PNG image, and when to point it to the Body PNG image.

I've stepped through each MeshBinding-->Mesh-->MeshSection and I can isolate the mesh that represents the head. My problem is that I'm not exactly sure how to then tell the engine that it's iterating through the head mesh vertex data, so it needs to use the head PNG image sampler when using the texcoords etc.

I have been trawling through the Havok examples and demos but cannot seem to find anything useful to reverse engineer.

Is there some kind of naming flag hidden in the exported Havok mesh data?

Out of curiosity: at a high level, how would you implement that capability in the code, i.e. how do you let OpenGL/DirectDraw know that it is now using the head mesh section?

Posted by HavokTim

Hi danothom--

MeshSections do not maintain name information, but they do have material information that you can examine to determine which textures should be associated with each section. When you process a MeshSection, reference the member variable m_material of type hkxMaterial. Each material has an array m_stages containing TextureStage objects. You will want to iterate through these stages, using the m_usageHint field to determine the intended usage of each texture (diffuse map, specular, etc). Each TextureStage object has a field m_texture that points to its corresponding texture file, but it's a typeless handle, so you'll need to do something like this to get the texture's path:


hkStringBuf fullFilename;

if( material->m_stages[j].m_texture.getClass() == &hkxTextureFileClass )
{
	hkxTextureFile* sourceTexture = static_cast<hkxTextureFile*>( material->m_stages[j].m_texture.val() );
	fullFilename = sourceTexture->m_originalFilename;
}
else if( material->m_stages[j].m_texture.getClass() == &hkxTextureInplaceClass )
{
	hkxTextureInplace* sourceTexture = static_cast<hkxTextureInplace*>( material->m_stages[j].m_texture.val() );
	fullFilename = sourceTexture->m_originalFilename;
}

Thus, if you construct your materials alongside your meshes, you should be able to associate textures with each sub-mesh appropriately.

I'm not sure what you mean by letting the 3D API know that you are using a given mesh section. Typically, you make a series of calls to change shaders, samplers, and so forth, then issue a draw call for a submesh, then repeat. The time at which you begin rendering a particular submesh is therefore an implicit rather than explicit operation. There are debug calls you can make to group calls hierarchically, but these are not required (and should be avoided in final code due to performance overhead).
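
As a rough illustration of that flow, the per-frame loop in OpenGL tends to look something like the sketch below. None of this is Havok API; the SubMesh struct is just hypothetical engine-side bookkeeping you would build from each hkxMeshSection.

#include <vector>
#include <GL/glew.h>   // or whichever GL loader you already use; it must provide glUseProgram etc.

// Hypothetical engine-side state built from each hkxMeshSection;
// none of these names come from the Havok SDK.
struct SubMesh
{
	GLuint  shaderProgram;   // shader chosen from the section's material
	GLuint  diffuseTexture;  // e.g. the head PNG vs. the body PNG
	GLuint  vao;             // vertex + index buffers bound to this VAO
	GLsizei indexCount;
};

void drawSubMeshes( const std::vector<SubMesh>& subMeshes )
{
	for ( const SubMesh& sm : subMeshes )
	{
		// "Switching" to a submesh is just binding its state, then issuing the draw call.
		glUseProgram( sm.shaderProgram );
		glActiveTexture( GL_TEXTURE0 );
		glBindTexture( GL_TEXTURE_2D, sm.diffuseTexture );
		glBindVertexArray( sm.vao );
		glDrawElements( GL_TRIANGLES, sm.indexCount, GL_UNSIGNED_SHORT, 0 );  // assumes 16-bit indices
	}
}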

Please let me know whether this fully answers your questions. Thanks!
--Tim

Posted by danothom

Hi Tim,

Apologies for the delayed response.

I've been working through your provided examples and advice and I think I'm getting my head around the concept which I need to employ here - but please correct me if I'm wrong.

1.) I iterate through each MeshBinding--> Mesh--> MeshSection and process/consume/capture all of the relevant index + vertex buffer data (position, normal, texcoords).
2.) Also process/capture the relevant material data for each MeshSection by iterating through each hkxMaterial--> m_stages.
2.1.) If the TextureStage's m_texture object is an hkxTextureInplaceClass object, then the texture data is self-contained in the Havok model. Create a new texture/image object and load in the m_texture data using m_data etc.
2.2.) If the TextureStage's m_texture object is an hkxTextureFileClass object, then the texture data needs to be loaded in from an associated file, usually a PNG. Use m_filename to grab the file location and load in the texture data using the preferred image loader etc.
3.) Change shaders/samplers accordingly so that they sample from each MeshSection's texture object using the vertex's texcoords.
(At this stage I've essentially created the VBOs and am ready to draw the element array.)
4.) Make the draw calls while iterating through each MeshSection of the MeshBinding--> Mesh--> MeshSection of my own mesh object model.

Does that look about right? I'm not worried about lighting schemes/data yet until I get this nailed.

Note: still a bit fuzzy about point 2.2, as I'm sure I'd be loading the same file multiple times for each MeshSection etc. - however, the HavokGirl_lowres model uses only hkxTextureInplaceClass, so this isn't an issue yet.

Also, I'll worry about the Rig and bone loading and the associated transforms + blend weights/indices after this point, even though I'm pretty sure just drawing the static mesh is going to cause some projection transform artifacts, as mentioned in this older post:
http://software.intel.com/en-us/forums/topic/289423

Thanks

Dan

Posted by HavokTim

Hi Dan--

That looks like the right way to process the scene data. Let me know if you run into any specific troubles and I'll try to help you through them.

With respect to loading files multiple times, you are right that this could become an issue. A common way of handling this is with an AssetManager that contains a map from FileKey to Asset*. FileKeys can, for example, be a hash of the (normalized) path string, and Asset can be the base class of all classes loaded from a source file. Then, when you need an asset, you make the request from the AssetManager, which returns the existing reference if possible, or loads, registers, and returns a new asset otherwise.
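
A bare-bones version of that cache might look like the sketch below. All of the names (AssetManager, Asset, loadFromDisk) are illustrative only; none of this is Havok API.

// Minimal caching asset manager; the key is a hash of the normalized path.
#include <cctype>
#include <memory>
#include <string>
#include <unordered_map>

struct Asset { virtual ~Asset() {} };   // base class for anything loaded from a source file

class AssetManager
{
public:
	std::shared_ptr<Asset> get( const std::string& path )
	{
		const std::size_t key = std::hash<std::string>{}( normalize( path ) );
		auto it = m_assets.find( key );
		if ( it != m_assets.end() )
		{
			return it->second;   // already loaded once: hand back the existing reference
		}
		std::shared_ptr<Asset> asset = loadFromDisk( path );
		m_assets.emplace( key, asset );
		return asset;
	}

private:
	static std::string normalize( std::string p )
	{
		// Lower-case, forward-slash form so "Textures\\Head.PNG" and "textures/head.png"
		// map to the same key.
		for ( char& c : p )
		{
			if ( c == '\\' ) c = '/';
			c = static_cast<char>( std::tolower( static_cast<unsigned char>( c ) ) );
		}
		return p;
	}

	std::shared_ptr<Asset> loadFromDisk( const std::string& path )
	{
		// Plug in your real image/file loader here; a placeholder keeps the sketch self-contained.
		(void)path;
		return std::make_shared<Asset>();
	}

	std::unordered_map<std::size_t, std::shared_ptr<Asset>> m_assets;
};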

Best regards,
--Tim

Posted by danothom

Hi,

Picking this back up again after an extended xmas hiatus.

I've been messing with reading and capturing the material in each MeshSection and then sampling that in GLSL.

I'm reading the data from the "hkLowResSkin_L4101.hkx" Havok model file. The Havok hkxTextureInplace object is found in each material object in each MeshSection within the model. The HavokGirl model file only seems to use hkxTextureInplace, so I basically grab the raw m_data from each TextureStage in each m_material object.

1.) The issue I'm having is to do with OpenGL's glTexImage2D function. Basically I need to load the texture data into OpenGL's buffers; however, I require a few basic parameters for this function to work, namely the width and height of the texture image, as well as the texture data format, i.e. RGBA8, RGB, etc.

E.g.:

glTexImage2D(GL_TEXTURE_2D, 0, _TEXTURE_FORMAT_, _WIDTH_, _HEIGHT_, 0, _TEXTURE_FORMAT_, GL_UNSIGNED_BYTE, &MeshSection->Material->m_data);

Normally I could grab that from a physically loaded texture file using an image loader which would then expose the required parameters and other useful info.
My question is how do I get these required parameters from the hkxTextureInplace objects?

2.) Also, any info about the hkxTextureInplace/hkxTextureFile objects? I can't find anything substantial on the web or in the Havok documentation. Is the m_data object in hkxTextureInplace still compressed, and does it have to be parsed/manipulated before I use the image data?

I'm pretty new to most of this stuff so perhaps I've missed something.

Anyway, any info or help would be much appreciated.

Thanks,

Dan

Posted by HavokTim

Hi Dan--

An hkxTextureInplace is a file buffer, not a proper texture. Its member field m_fileType contains a string with the file's extension (BMP, PNG, GIF, etc.), and m_data is a buffer containing the loaded file information. Below, I've included example code showing how you might create an hkgTexture* from an hkxTextureInplace. The members available from an hkgTexture should be more familiar to you, and if you have your own internal texture format, you can either convert the hkgTexture* to your custom format, or parse the buffer directly into your texture.


hkxTextureInplace* t;
hkgDisplayContext* context;

hkgTexture* tt = hkgTexture::create(context);
if (t->m_data.getSize() > 0)
{
	hkIstream is(t->m_data.begin(), t->m_data.getSize());
	if (is.isOk())
	{
		switch (t->m_fileType[0])
		{
		// png
		case 'P':
		case 'p':
			tt->loadFromPNG(is);
			break;
		// bmp
		case 'B':
		case 'b':
			tt->loadFromBMP(is);
			break;
		// tga
		case 'T':
		case 't':
			tt->loadFromTGA(is);
			break;
		// dds
		case 'D':
		case 'd':
			tt->loadFromDDS(is);
			break;
		default:
			HK_WARN_ALWAYS(0xabba63fd, "HKG: Unsupported texture file type : " << t->m_fileType[0] << t->m_fileType[1] << t->m_fileType[2]);
			tt->removeReference();
		}
	}
	else
	{
		HK_WARN_ALWAYS(0xabba63fc, "HKG: Unsupported texture file type : " << t->m_fileType[0] << t->m_fileType[1] << t->m_fileType[2]);
		tt->removeReference();
		return -1;
	}
}

Let me know if you have any questions or problems getting this to work. Thanks!

--Tim
