VRM export from VRoid for cluster: what should I actually do?

VRoid Studio can now export VRM files compatible with cluster, but there are also complaints like "the image quality is muddy" and "cluster's restrictions are too strict". Since cluster is a Japanese virtual SNS accessible from Macs and smartphones, I'd like to solve this problem in a simple way.

Export for cluster is supported out of the box

The current version of VRoid Studio supports export for cluster as standard. Select it from Export on the Camera / Exporter tab.

Strictly speaking these are not the only limits, but you need to satisfy at least the following to upload to cluster.

[Screenshot: cluster's VRM upload limits]

Rather than letting the material atlasing output at 2048px, export at 4096px and resize

VRoid Studio's atlasing function behaves rather badly when outputting at 2048px, so wherever possible I recommend exporting at 4096px and then resizing. (Perhaps it misbehaves whenever the reduction ratio is not 50%?)

[Screenshot: atlased texture output]

For some reason the hair material gets strangely duplicated several times in both cases, and I don't know the cause.

A 4096px export can't be uploaded to cluster as is, but the software below can resize it to 2048px, so export at 4096px and resize whenever you can.

- VRM Texture Replacement & Optimization Tool #cluster #VRM
https://booth.pm/ja/items/2601784
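The linked tool handles unpacking and repacking the VRM for you. If you only need to shrink an already-extracted texture PNG yourself, a minimal sketch with Pillow (the file names are placeholders):

```python
# Minimal sketch: downscale an extracted 4096px texture to 2048px.
# Assumes the texture has already been pulled out of the VRM as a PNG;
# the tool above handles the VRM packing/unpacking itself.
from PIL import Image

def downscale_texture(src_path: str, dst_path: str, size: int = 2048) -> None:
    img = Image.open(src_path)
    # LANCZOS resampling gives a clean result for a 50% reduction
    img.resize((size, size), Image.LANCZOS).save(dst_path)
```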

However, the full 2048px resolution is only available on cluster to event organizers, and at that size, even with the pixels crushed a little, the details hold up well enough that you could arguably just use VRoid's output as is; I won't insist on that extreme position. The real problem is the forced 512px downscale for general users, which degrades quality even further.

Could VReducer be the savior of ordinary cluster users?

The maximum texture resolution of VRMs usable by general users, on mobile displays, or in worlds is effectively limited to 512px, so we have to do something about it. With VRoid's atlas layout, resizing to 512px leaves the skin and costume at 256x256px, which is just too small. But if the whole costume could use the full 512x512px, wouldn't that be a much better deal?
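To put numbers on that trade-off, a quick back-of-the-envelope calculation (the 1024px costume region is an illustrative assumption about the default atlas layout):

```python
# Linear scale when a 2048px atlas is forced down to 512px
def slot_after_resize(slot_px: int, atlas_px: int = 2048, target_px: int = 512) -> int:
    return slot_px * target_px // atlas_px

# Costume sharing the atlas (occupying an assumed 1024px-wide region)
# vs. the costume given a whole texture of its own
shared = slot_after_resize(1024)   # 256px on a side
whole = slot_after_resize(2048)    # the full 512px
print(shared, whole)
```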

So let's go in the direction of prioritizing the resolution of the costume above everything else when resizing. VReducer is customizable software written in Python, so it offers plenty of flexibility for tuning things to squeeze out as much costume resolution as possible.

Tomine has also written up how to install VReducer, so please refer to that article.

Installing VReducer on a Mac

Since most users are on Windows, if you're following along on a Mac, read pip in the instructions as pip3 and python as python3. Install Homebrew if necessary. Rosetta 2 seems to work fine on M1 machines: open a terminal and run

% arch -x86_64 zsh

and everything will behave as if you were on an Intel machine.
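Putting the Mac substitutions together, a typical session might look like this (the clone URL is the branch repository mentioned later in this article; the requirements file and script name are assumptions, so follow the VReducer README):

```shell
arch -x86_64 zsh                      # Apple Silicon: drop into an Intel shell via Rosetta 2
git clone https://github.com/yakumo-proj/VReducer.git
cd VReducer
pip3 install -r requirements.txt      # README says "pip"; use pip3 on the Mac
python3 vreducer.py input.vrm         # likewise "python" becomes python3 (hypothetical invocation)
```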

Export settings when using VReducer

This time, export from VRoid with material atlasing turned off, and adjust the polygon and bone counts so they stay within cluster's limits before exporting the VRM.

VReducer's constraints

Material information such as normal maps and rim lighting (sphere) gets thrown away. But for now, that's not a big deal:

――Wasn't the normal map fairly low-resolution to begin with? ――"Textures that don't look like VRoid" are all the rage, and most people dislike rim lighting anyway, right?

Some people won't miss them at all if they're erased, and if you're not satisfied with that, get comfortable with Blender and Unity.

Modifying VReducer

Let's modify VReducer right away. Atlasing merges the costume and skin textures with other textures, so all we need to do is remove that atlasing step. Searching for the relevant code, I found it around line 620 of vrm/reducer.py. Sure enough, this is the actual texture-atlasing process.

    # Combine materials
    print('combine materials...')

# (omitted)
        # Clothes
        if cloth_place := get_cloth_place(gltf):
            gltf = combine_material(gltf, cloth_place['place'], cloth_place['main'], texture_size)

        # Body, face, mouth
        gltf = combine_material(gltf, {
            '_Face_': {'pos': (0, 0), 'size': (512, 512)},
            '_FaceMouth_': {'pos': (512, 0), 'size': (512, 512)},
            '_Body_': {'pos': (0, 512), 'size': (2048, 1536)}
        }, '_Face_', texture_size)
        # Switch the render type
        face_mat = find_vrm_material(gltf, '_Face_')
        face_mat['keywordMap']['_ALPHATEST_ON'] = True
        face_mat['tagMap']["RenderType"] = 'TransparentCutout'

        # Eyeline, eyelashes
        gltf = combine_material(gltf, {
            find_eye_extra_name(gltf): {'pos': (0, 0), 'size': (1024, 512)},
            '_FaceEyeline_': {'pos': (0, 512), 'size': (1024, 512)},
            '_FaceEyelash_': {'pos': (0, 1024), 'size': (1024, 512)}
        }, '_FaceEyeline_', texture_size)

        # Iris, highlights, whites of the eyes
        gltf = combine_material(gltf, {
            '_EyeIris_': {'pos': (0, 0), 'size': (1024, 512)},
            '_EyeHighlight_': {'pos': (0, 512), 'size': (1024, 512)},
            '_EyeWhite_': {'pos': (0, 1024), 'size': (1024, 512)}
        }, '_EyeHighlight_', texture_size)

        # Hair, back of the head
        hair_back_material = find_vrm_material(gltf, '_HairBack_')
        if hair_back_material:
            hair_resize = {'_HairBack_': {'pos': (512, 0), 'size': (1024, 1024)}}
            hair_material = find_near_vrm_material(gltf, '_Hair_', hair_back_material)
            if hair_material:
                hair_resize[hair_material['name']] = {'pos': (0, 0), 'size': (512, 1024)}
                gltf = combine_material(gltf, hair_resize, hair_material['name'], texture_size)
The iron rule: resolution > material textures

Even after shrinking the face parts somewhat there is plenty of room to spare, so it's better to push atlasing further there and raise the resolution of the body and costume. So let's rewrite the code so the face parts are still merged while the body and costume (above) are left out of the atlas. That's this branch, which also adds support for VRoid 0.12.1: https://github.com/yakumo-proj/VReducer/tree/benefit-512px

Here's before and after. The face hardly changes, but everything below the neck is completely different. The rim light is gone, but don't you agree that resolution matters overwhelmingly more?

[Screenshot: before/after comparison]

To be fair, I didn't just use the modified VReducer: I also swapped the base hair for the one from the Hair Samples and tweaked the normals to match the costume's blend shapes (this can be done just by slightly adjusting the polygon reduction at export).

How can we make it look even better?

There's a limit to what you can do at 512px, but is there still headroom within cluster's data limits? Let's look at the output VReducer prints:

(omitted)
vrm materials: 7
materials: 7
textures: 10
images: 10
meshes: 3
primitives: 18
	 Face(Clone).baked : 9
	 Body.baked : 6
	 Hair001.baked : 3

The material count is 7. With only one slot left I can't say there's plenty of headroom, but it might be possible to layer one more material on top (check with Unity's Rashomon tool). Since up to 16 textures are allowed, you might be able to bring back the normal map and give the costume a three-dimensional look and texture. I wonder whether that would make 512px look good enough.

Please experiment with various things. And if you find an easy way to convert to a gorgeous VRM with Blender or Unity, let me know.

If I upload a 512px-based avatar, the resolution is too low for events. What should I do?

Make one for events (2048px) and one for worlds (512px), and upload both. That's exactly what the avatar-switching mechanism is for.
