It's both expensive and a time saver, and it takes specialized technicians to handle the raw data usefully. It's an entirely new trade in itself. This is borderline normal production for the big companies nowadays, and soon enough it will reach indie developers too. Can you imagine what will come in 10 years? 20 years? From both a gaming standpoint and a studio standpoint, this industry is constantly being challenged to come up with new technological solutions.
It really depends on the application. For movies this is fine, but when you're talking games there are several technical issues.

All 3D scans need to be cleaned up. The scan tool itself has some depth artifacts, and it's also impossible to pose a model symmetrically. As a modeler you have two choices for cleanup: you can simply use the scan data as reference for a new, symmetrical model, or you can overlay a model over the scan data and bake it out. With the second method you also have to use an individual rig per model.

The lighting-condition technique is actually pretty old. It's something I was doing in school: you take lights from different angles, and you can extract a lot of surface information that way.

Most shading accuracy comes down to available technology. You can get a lifelike image if you can wait 20 minutes per frame, but we aren't there at real-time rates. As for the texture, the photos and scans are pretty much reference. Most of the work is done in a painter like Substance, which produces accurate skin maps quickly and efficiently. Simply taking a photo won't give you enough subsurface coloring information that can be extracted in a short period of time.
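The multi-light idea described above is essentially photometric stereo: photograph a static object under several known light directions and solve for per-pixel surface normals. A minimal sketch, assuming a Lambertian surface and known light directions (the function name and the tiny synthetic test data are illustrative, not from any particular pipeline):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (k, h, w) grayscale intensities, one per light.
    light_dirs: (k, 3) unit vectors pointing toward each light.
    Returns (h, w, 3) unit normals and an (h, w) albedo map."""
    k, h, w = images.shape
    I = images.reshape(k, -1)  # stack pixels: (k, h*w)
    # Lambertian model: I = L @ (albedo * n); solve least squares per pixel.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)  # normalize to unit vectors
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)

# Synthetic check: a flat 2x2 patch facing +z, unit albedo, three lights.
lights = np.array([[0.0, 0.0, 1.0],
                   [0.6, 0.0, 0.8],
                   [0.0, 0.6, 0.8]])
true_n = np.array([0.0, 0.0, 1.0])
imgs = (lights @ true_n).reshape(3, 1, 1) * np.ones((3, 2, 2))
normals, albedo = photometric_stereo(imgs, lights)
```

With three or more non-coplanar lights the per-pixel system is (over-)determined, which is why even a classroom rig with a handful of lamps recovers usable surface detail.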
Comments
edit: Google is aware of the problem. It was, and sometimes still is, being redirected. It's being fixed.
edit 2: works now, but slowly.