Maps from Real Life

Started by blueking, June 06, 2014, 04:26:04 AM


blueking

I have access to a Matterport camera, a unique device that uses RGB and IR sensors to create 3D models of real-world spaces (www.matterport.com). This could be an incredible tool for the mapmaking community: it would allow nearly anyone, regardless of skill or experience, to create maps of places they've been, and it would significantly reduce the time experienced mapmakers need to build their maps by letting them use scans of real places as starting points.

The only problem here is that the models it creates are .obj's. I have very little experience making maps, but I know that .obj's are a no-go in terms of maps for Source. I attempted to convert a model that the camera had created into a .vmf through the Wall Worm VMF Exporter (please don't laugh!) but it just locked my computer up (I let it run for about 12 hours before killing it).

I know people have asked before if there's an easy way to convert .obj's into .vmf's (or even .bsp's) and that the answer is usually "No, stop being so lazy, just remake it out of brushes in Hammer/WWMT" but I think this is a bit of a special circumstance. It's not like you can just reconfigure the camera to make brushes instead of .obj's. So, my question is this: is there a way to convert .obj's (which are not necessarily completely convex) into Source maps with minimal effort?

tl;dr: Camera turns your world into .obj's. Can you turn .obj's of spaces into maps?

Examples of models created by this camera are available upon request.

Joris Ceoen

Hello blueking,

Because Shawn is currently unavailable (gone for a few days), I will try to solve this 'problem' with you. The quotes are there because it's not really a problem in your situation, but rather a misunderstanding of how models and level design work in Source.

Source SDK primarily uses the .mdl model format, which is produced by a program called studiomdl.exe. Compiling a model also generates a bunch of other files (unimportant to us, but important for the engine) that together make the model work in a Source map. .obj is indeed a format that Hammer cannot read, so putting a .obj in a folder and searching for it in Hammer won't get you anywhere. That's because the .obj hasn't been converted (in Hammer terms, compiled) into a .mdl first.
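To give you an idea of what that compile step looks like, here is a minimal sketch (every file name and path below is a hypothetical placeholder, not something from your setup): studiomdl reads a small QC script that points at the exported mesh.

```python
# Minimal sketch of compiling a scanned prop with studiomdl.
# All names and paths here are hypothetical placeholders.
import subprocess

qc = """\
$modelname    "props_scanned/chair01.mdl"
$body chair   "chair01_ref.smd"
$staticprop
$surfaceprop  "wood"
$cdmaterials  "models/props_scanned/"
$sequence idle "chair01_ref.smd"
"""

with open("chair01.qc", "w") as f:
    f.write(qc)

# studiomdl.exe ships with the Source SDK; -game points at your game/mod folder.
subprocess.run(["studiomdl.exe", "-game", r"C:\path\to\gamedir", "chair01.qc"],
               check=True)
```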

This is essentially the problem you have. However, judging from your post, it seems that you are scanning an entire (real-life) area and trying to make a level out of it. I should start by saying that this approach to level design is wrong in many respects, so let me begin with the basics:


  • Hammer (the level design editor) makes use of brushes, displacements and models to build the environment in which players will move around. However, those 3 things are very different from one another! Brushes are blocks or simple meshes that have to be made manually in Hammer or, if you use Wall Worm Model Tools' level design capabilities, in 3ds Max. They define where the world ends and are also used for optimization, determining what should or should not be rendered. Brushes are essential in Source level design and, for that reason, cannot be ignored, or you will end up with a terribly unoptimized map.
  • Displacements allow for more complex and organic ground surfaces; they differ from brushes in that they play no part in optimization at all.
  • The problem with the previous 2 is that they are engine-level constructs unique to Hammer. They have many limitations that came with Source SDK, and any complex operation will crash your map in a heartbeat. That's where models come in, which can take almost any form as long as the polygon count stays low enough, preferably under 10k. The real limit is hard to pin down, but anything above 10k is not recommended; IMO 10k is really the maximum for a single model (see the quick check after this list).
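As a quick sanity check against that ~10k guideline, here is a small sketch that counts the triangles in a scanned .obj (the file name is a placeholder):

```python
# Count the faces of an .obj and compare against the ~10k prop guideline.
def count_obj_faces(path):
    faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith("f "):
                # OBJ faces may be quads or n-gons; count the triangles they split into.
                verts = len(line.split()) - 1
                faces += max(verts - 2, 1)
    return faces

n = count_obj_faces("scan.obj")
print(n, "triangles ->", "OK for a prop" if n <= 10_000 else "needs decimation")
```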

I explained all of this first to make you understand that scanning an entire area and simply outputting it as a level is impossible for Source SDK. What's more, as far as my knowledge goes, such machines generate millions of polygons, which is a no-go as well. On top of that, the textures it generates could have immense resolutions that Source would never be able (and should never have) to handle. Finally, those resolutions may all be invalid, as Source requires power-of-2 resolutions (which means 256x256, 512x512, 1024x1024, 256x1024 or any other combination of powers of 2). 600x578 will not work; 512x413 will not work. Only powers of 2.
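The power-of-2 rule is easy to test; a tiny sketch, using the example sizes above:

```python
# A dimension n is a valid power of 2 iff n > 0 and n & (n - 1) == 0.
def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0

for w, h in [(512, 512), (256, 1024), (600, 578), (512, 413)]:
    ok = is_power_of_two(w) and is_power_of_two(h)
    print(w, "x", h, "->", "valid for Source" if ok else "will not work")
```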

What I suggest is to use your camera to scan objects that could become models, make sure to optimize them so they have a low polycount (you can do this with ProOptimizer or similar tools), and use those in a level that you handcraft yourself in Hammer or, even better, Wall Worm Model Tools. Since you have very little experience, I suggest you start by learning the very basics of level design in general and of level design in Source SDK in particular. The Source SDK docs are a good place to start; watching plenty of tutorials will quickly get you up and running, and once you're more experienced with level design in Source SDK you'll be able to understand how models and textures work as well.
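If you want to automate that decimation pass outside of Max, something like the following sketch would work; it assumes the Open3D library as a stand-in for ProOptimizer, and the file names and target count are placeholders:

```python
# Sketch: quadric decimation of a scanned mesh down to a prop-friendly polycount.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("scan.obj")
lowpoly = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
o3d.io.write_triangle_mesh("scan_lowpoly.obj", lowpoly)
```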

A good read to get started with level design from 3ds Max to Source is the following: http://www.wallworm.com/projects/utilities/Hammered-to-the-Max.pdf

If you have any further questions, feel free to ask. Shawn may shed more light on the situation when he gets back, but that's what I can advise you for the moment.

wallworm

Joris' answer sums it up for the most part.

I too have interests in this kind of technology and workflow, but it just isn't practical at this point in time. You really aren't going to be able to make a playable level from these scanned environments. It's just not going to happen :(

However, that doesn't mean there isn't a utility for it! The closest thing I have to this is Autodesk Image Modeler. It's pretty cool and I have ideas to incorporate it into some future projects. Image Modeler, however, produces low-poly results (judging from the Matterport tech video I watched)...

If I had this camera, I'd certainly use it for making some cool environments for Source. Making models to fill up an environment, as Joris mentioned, is one immediate and practical use. You could make a bunch of realistic props that would otherwise take a lot of time to create. Then again, the amount of time you'd spend cleaning them up to be lower-poly could make it less efficient than it seems at first glance. Only trying it out and testing will answer that.

The other use is to create 3D environments to further edit in 3ds Max and then render into a 2D sky texture. So imagine going to a cool location, scanning it into 3D, then adding some extra mood/lighting/theme/flavor to render it as the 2D sky. This way you create the scaffold (the real environment) that then becomes the background image of your level.

Hopefully this gives you food for thought.

blueking

Hi Joris,

Thanks very much for your detailed response, I really appreciate you taking the time to explain the basics of Source and why .obj's are ill-suited to become maps.

My goal for this is not to create a map based on someplace I have been, but to create an easily repeatable process that many people can follow to create maps based on places they scan, without necessarily having much experience with either Hammer or 3ds Max. The device is primarily being marketed for real estate and home remodeling purposes, but I'd like to prove that it has utility in the game sector as well by making mapmaking more accessible to the masses. I think this could generate a lot of interest in mapmaking from people who have never really considered it before.

The camera is not suited to scanning individual objects. It is tripod-mounted and spins 360 degrees to capture its surroundings, and it is only accurate to within an inch of a feature's actual dimensions. While it generates far more polygons than a map needs (the model I'm testing with has about 17k), they are distributed such that individual objects you might want to use as models get relatively few of them and end up jagged and unrealistic. It is also capable of stitching multiple captures (each time it spins and captures its surroundings) into a single model, which is what makes it so good at modeling large spaces.

The textures it generates are all 2048x2048, which are pretty big compared to the numbers you mentioned, but at least they are powers of 2.
The ideal solution that I'm imagining for this is a tool that is able to break a model down into its most basic surfaces (floors, walls, and ceilings) and turn those into brushes, while applying the textures properly. Is there anything like that in existence?
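To illustrate the first step I have in mind, here is a rough sketch of classifying faces by their normals (purely hypothetical code, not an existing tool; it assumes vertices are (x, y, z) tuples with z pointing up):

```python
# Classify each triangle of a scan as floor, wall, or ceiling by its normal.
def normal(a, b, c):
    ux, uy, uz = (b[i] - a[i] for i in range(3))
    vx, vy, vz = (c[i] - a[i] for i in range(3))
    nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
    length = (nx*nx + ny*ny + nz*nz) ** 0.5 or 1.0
    return nx/length, ny/length, nz/length

def classify(tri):
    _, _, nz = normal(*tri)
    if nz > 0.7:
        return "floor"
    if nz < -0.7:
        return "ceiling"
    return "wall"

print(classify(((0, 0, 0), (1, 0, 0), (0, 1, 0))))  # horizontal triangle -> "floor"
```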

Shawn,

Thanks for taking the time to look at the technology, and thanks for all your work on Wall Worm. I haven't had the chance to learn all the intricacies of it but it seems like a really cool tool.

From what I've seen of Image Modeler, I'm not quite sure it's a good fit here. While the camera is capable of making 360-degree panoramas, its real strength is that it also collects 3D data, so I don't think that discarding all of that would result in a better end product.

Your sky texture idea sounds pretty neat, although I must admit I'm not familiar with how those work. Unfortunately, the camera does not work well outdoors. It uses an IR projector to measure depth, so either sunlight interferes with the dot pattern or it's too dark to get decent RGB data. Would it still have any utility for this purpose?

Again, thank you both for taking interest in the matter(port (sorry)) and taking the time to write thoughtful responses to a newb such as myself. What you do is super-awesome and I'd love to get more people interested in it.

CarbonCopyCat

Though the device itself wouldn't be able to make maps in Source, I think it could be useful as a sort of reference model for the actual map. I've done something similar with a Kinect and scanning software, so the model itself shouldn't really be a problem, but textures might be. Just an idea though, no clue about the practicality of doing so.  :P

wallworm

Quote from: blueking on June 13, 2014, 02:24:35 AM
...I really appreciate you taking the time to explain the basics of Source and why .obj's are ill-suited to become maps.
Just to clarify, this isn't necessarily true. That an asset is derived from a .OBJ file is not the issue; an OBJ file can, in principle, be converted into a level as proper brushes. It's not so much the file format that is the problem. The problem is that the geometry itself must be able to be broken up into elements where each element is convex, has no coplanar faces, and is sealed. Furthermore, it is very important for world geometry to have most of its vertices aligned exactly to the world grid.
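Just to illustrate the grid requirement (the easiest of those constraints), a sketch that snaps scanned vertices to the nearest grid unit; a 1-unit grid is assumed, and real maps often use coarser grids. Convexity and sealing are the genuinely hard parts; snapping alone doesn't get you there.

```python
# Snap vertex coordinates to the nearest Hammer grid unit.
def snap_to_grid(vertices, grid=1.0):
    return [tuple(round(c / grid) * grid for c in v) for v in vertices]

print(snap_to_grid([(12.37, -3.02, 96.51)]))  # -> [(12.0, -3.0, 97.0)]
```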

Displacements also require a very specific vertex ordering and count, as well as a size limit.

And, the brushes must be such that every polygon (side) must have planar UV mapping.

For these reasons, it is technically unfeasible. I think the technology would have to be updated with some math/imaging voodoo designed specifically with Source in mind to make it worth the effort (for playable levels). Given the kind of cleanup you would have to do, it would probably take longer to use it for world geometry/displacements than to make them from scratch.

Models, however...


Quote from: blueking on June 13, 2014, 02:24:35 AM
The camera is not suited to scan individual objects. It is tripod-mounted and spins in 360 degrees to capture its surroundings, and it is only accurate to within an inch of a feature's actual dimension. While it generates far more polygons than is necessary for a map (the model I'm testing with has about 17k), they are split up such that objects that you might want to use as models have relatively few and appear jagged and unrealistic. It is also capable of stitching together multiple captures (each time it spins and captures its surroundings) together into a single model, which is what makes it so good at making models of large spaces.

This is unfortunate, because capturing individual objects probably has the most immediate utility... but...

Quote from: CarbonCopyCat on June 13, 2014, 07:21:49 AM
Though the device itself wouldn't be able to make maps in Source, I think it could be useful as a sort of reference model for the actual map. I've done something similar with a Kinect and scanning software, so there shouldn't really be a problem with the model, but textures may be a problem.

That's a good idea.

If the textures need reworking, there is no real problem. In Max you can easily resize a texture by using Render Map in the Material Editor. Or, in a model itself, make a new UV channel, scale up the UVs... and RTT into that channel.
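(Outside of Max, the resize itself is also trivial to script; a sketch assuming the Pillow imaging library, with placeholder file names, and with VTF conversion being a separate step:)

```python
# Downscale the camera's 2048x2048 texture to a friendlier power-of-2 size.
from PIL import Image

img = Image.open("scan_tex.png")
img.resize((1024, 1024), Image.LANCZOS).save("scan_tex_1024.png")
```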

With this in mind, there is a great process:

1) Import Scene.

2) Break out Prop Objects from scene and edit as necessary.

3) Make a new UV channel for models you want to use.

4) RTT the original texture into new UV channel.

5) Using the new UVs (at an appropriate size) and the new texture as the base layer in a new Composite map in Max, retexture using the usual Max texturing techniques (Material Editor, Viewport Canvas, etc.).

Joris Ceoen

As Shawn says, even if it is not directly suited to single models, you can just scan the room with the items in it, keep them as high-poly/as highly detailed as possible, and only then use Render To Texture. It would require you to make the low-poly models yourself, but then again that's ONLY the low-poly, whereas if you start from scratch you also need to make the high-poly, which in this case is done by the camera.
