How to Render ShapeNet Objects?

Given .obj (& .mtl) files, how do you render images from different camera angles? With the Blender Python API.

Jimmy Yeh
Apr 5, 2021 · 4 min read

The ShapeNet dataset

As documented on their website, ShapeNet is an ongoing collaborative effort between researchers at Princeton, Stanford, and TTIC to establish a richly annotated, large-scale dataset of 3D shapes. The currently released subset is ShapeNetCore, which contains single clean 3D models with manually verified category and alignment annotations.

In ShapeNetCore (v2), the directory structure is roughly:
> taxonomy.json: lists the synsetId(s) and the English name(s) of each model type, as well as the combined number of model instances.

> [synsetId]: the synset noun offset for the model type in WordNet v3.0 (v3.1 is available online), as an eight-digit zero-padded string, e.g. 02828884 for “bench” (the same IDs are used in ImageNet).

>> [fullId]: the unique ID of the model.

>>> models: the directory containing the mesh files (.obj, .mtl).

-02691156 (airplane)
    -fff513f407e00e85a9ced22d91ad7027
        -models
            -model_normalized.obj
            -model_normalized.mtl
            ...
        (-screenshots)
            -fff513f407e00e85a9ced22d91ad7027-0.jpg ...
        (-images)
            -texture0.jpg
    -ff7c22a964a5a54e3bb4b8f3758b3c41
    ...
-02843684 (house)
...

The files we will use are model_normalized.obj and model_normalized.mtl (which may in turn reference ../images/texture0.jpg).
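To locate these files programmatically, here is a minimal sketch (the dataset root is an assumed path; the field names follow taxonomy.json):

import json
from pathlib import Path

SHAPENET_ROOT = Path('/path/to/ShapeNetCore.v2')  # assumed dataset location

# map synsetId -> human-readable name(s)
with open(SHAPENET_ROOT / 'taxonomy.json') as f:
    taxonomy = json.load(f)
name_of = {entry['synsetId']: entry['name'] for entry in taxonomy}

# collect every normalized mesh for one synset, e.g. airplanes
synset_id = '02691156'
obj_paths = sorted(SHAPENET_ROOT.glob(f'{synset_id}/*/models/model_normalized.obj'))
print(name_of[synset_id], len(obj_paths))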


Blender Python API

Blender is a free and open-source 3D creation suite that also provides a Python API, so rendering can be driven by a Python script. (The following borrows heavily from stanford-shapenet-renderer.)

Platform to run the script

There are two ways to execute a Blender Python script: from the built-in console and text editor within the Blender app (see this workshop video), or from a terminal.
For macOS, to call blender from the terminal, one can add the directory of the blender executable to PATH following the official instructions.

The minimum working example script

The goal is to edit render_blender.py into a Python function that allows specifying the light source and the camera angle.

import bpy

## initialize blender
context = bpy.context
scene = bpy.context.scene
render = bpy.context.scene.render
render.engine = 'BLENDER_EEVEE'  # or 'CYCLES'
render.image_settings.color_mode = 'RGB'  # ('RGB', 'RGBA', ...)
render.image_settings.color_depth = '8'  # ('8' for 0-255, '16')
render.image_settings.file_format = 'JPEG'  # ('PNG', 'OPEN_EXR', 'JPEG', ...)
render.resolution_x = 1024
render.resolution_y = 1024
render.resolution_percentage = 100
render.film_transparent = True  # only takes effect with an alpha-capable format such as PNG/RGBA
# delete the default cube (the active object in a fresh scene)
context.active_object.select_set(True)
bpy.ops.object.delete()
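The snippets below reference an args namespace (args.groundpath, args.depth_scale, ...). When the script is launched with blender --background --python render_blender.py -- <args>, Blender ignores everything after the standalone -- and leaves it for the script. A minimal parsing sketch (the argument names are assumptions matching the snippets below):

import sys
import argparse

# everything after '--' is ignored by Blender and passed to the script
argv = sys.argv[sys.argv.index("--") + 1:]
parser = argparse.ArgumentParser()
parser.add_argument('objpath', type=str)       # path to the ShapeNet .obj file
parser.add_argument('--groundpath', type=str)  # assumed: a ground mesh for realism
parser.add_argument('--skypath', type=str)     # assumed: a sky mesh for realism
parser.add_argument('--depth_scale', type=float, default=1.4)
parser.add_argument('--color_depth', type=str, default='8')
parser.add_argument('--format', type=str, default='PNG')
args = parser.parse_args(argv)
objpath = args.objpath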

After setting the render options and deleting the default cube, we can import our ShapeNet object (as well as a ground, a sky, etc., for more realistic rendering):

# shapenet object
bpy.ops.import_scene.obj(filepath=objpath)
obj = bpy.context.selected_objects[0]
context.view_layer.objects.active = obj
# add a ground
bpy.ops.import_scene.obj(filepath=args.groundpath)
ground = bpy.context.selected_objects[0]
# add a sky
bpy.ops.import_scene.obj(filepath=args.skypath)
sky = bpy.context.selected_objects[0]
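Note that bpy.ops.import_scene.obj is the OBJ importer of the Blender 2.8x/2.9x API used throughout this post; in recent Blender versions (3.x and later) it has been replaced, and the import line would instead be:

bpy.ops.wm.obj_import(filepath=objpath)  # Blender 3.x+ replacement for import_scene.obj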

Optional mesh cleaning:

# remove doubled vertices and add an edge split modifier:
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.remove_doubles()
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.modifier_add(type='EDGE_SPLIT')
context.object.modifiers["EdgeSplit"].split_angle = 1.32645  # radians (~76 degrees)
bpy.ops.object.modifier_apply(modifier="EdgeSplit")

Add light sources: a key light that casts shadows, plus a dim fill light from the opposite side that does not, so surfaces facing away from the key light are not completely dark:

# reuse the default light as the shadow-casting key light
light = bpy.data.lights['Light']
light.type = 'SUN'
light.use_shadow = True
light.specular_factor = 1.0
light.energy = 10.0
# add another light source so geometry facing away from the key light is not completely dark
bpy.ops.object.light_add(type='SUN')
light2 = bpy.data.lights['Sun']
light2.use_shadow = False
light2.specular_factor = 1.0
light2.energy = 0.015
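Sun lights in Blender are directional, so only the rotation of the light objects matters, not their location. A minimal sketch for aiming the key and fill lights from opposite sides (the angles are arbitrary assumptions):

import math

# sun lights are directional: aim them by rotating the light objects (angles assumed)
key_obj = bpy.data.objects['Light']  # the default light object, used as the key light
key_obj.rotation_euler = (math.radians(45), 0.0, math.radians(30))
fill_obj = bpy.data.objects['Sun']   # the light object created by light_add above
fill_obj.rotation_euler = (math.radians(-45), 0.0, math.radians(210))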

Set camera:

cam = scene.objects['Camera']
cam.location = (0, 1, 0.6)
cam.data.lens = 35
cam.data.sensor_width = 32
# keep the camera pointed at the target empty
cam_constraint = cam.constraints.new(type='TRACK_TO')
cam_constraint.track_axis = 'TRACK_NEGATIVE_Z'
cam_constraint.up_axis = 'UP_Y'
# an empty at the origin: parenting the camera to it lets us orbit by rotating the empty
cam_empty = bpy.data.objects.new("Empty", None)
cam_empty.location = (0, 0, 0)
cam.parent = cam_empty
scene.collection.objects.link(cam_empty)
context.view_layer.objects.active = cam_empty
cam_constraint.target = cam_empty

Finally, we can render the image:

scene.render.filepath = <filepath>  # output path, without the '.jpg' extension
bpy.ops.render.render(write_still=True)
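To render from several camera angles, rotate cam_empty between renders: because the camera is parented to the empty and the TRACK_TO constraint keeps it aimed at the origin, rotating the empty orbits the camera around the object. A minimal sketch (the view count and output paths are assumptions):

import math

num_views = 8  # assumed number of views around the object
for i in range(num_views):
    cam_empty.rotation_euler[2] = math.radians(i * 360.0 / num_views)
    scene.render.filepath = f"/tmp/render_{i:03d}"  # assumed output path
    bpy.ops.render.render(write_still=True)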

To also output a depth map and an object-ID map, add the following before deleting the default cube:

# add scene nodes to render the depth map and id map
scene.use_nodes = True
scene.view_layers["View Layer"].use_pass_object_index = True
nodes = bpy.context.scene.node_tree.nodes
links = bpy.context.scene.node_tree.links
# clear default nodes
for n in nodes:
    nodes.remove(n)
# create input render layer node
render_layers = nodes.new('CompositorNodeRLayers')
# create depth output nodes
depth_file_output = nodes.new(type="CompositorNodeOutputFile")
depth_file_output.label = 'Depth Output'
depth_file_output.base_path = ''
depth_file_output.file_slots[0].use_node_format = True
depth_file_output.format.color_depth = '8'
depth_file_output.format.file_format = 'JPEG'  # lossy 8-bit; see the note below for an exact alternative
depth_file_output.format.color_mode = "BW"
# remap, since 8-bit formats cannot represent the full range of depth
map = nodes.new(type="CompositorNodeMapValue")
# size is chosen somewhat arbitrarily; adjust until the resulting depth map looks right
map.offset = [-0.7]
map.size = [args.depth_scale]
map.use_min = True
map.min = [0]
links.new(render_layers.outputs['Depth'], map.inputs[0])
links.new(map.outputs[0], depth_file_output.inputs[0])
# create id map output nodes
id_file_output = nodes.new(type="CompositorNodeOutputFile")
id_file_output.label = 'ID Output'
id_file_output.base_path = ''
id_file_output.file_slots[0].use_node_format = True
id_file_output.format.file_format = args.format
id_file_output.format.color_depth = args.color_depth
id_file_output.format.color_mode = 'BW'
# map the integer object index into [0, 1] for image output
divide_node = nodes.new(type='CompositorNodeMath')
divide_node.operation = 'DIVIDE'
divide_node.use_clamp = False
divide_node.inputs[1].default_value = 2**int(args.color_depth)
links.new(render_layers.outputs['IndexOB'], divide_node.inputs[0])
links.new(divide_node.outputs[0], id_file_output.inputs[0])
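Two practical notes. The IndexOB pass is all zeros unless the object has a non-zero pass index, and an 8-bit JPEG cannot store metrically accurate depth (hence the MapValue remapping above). A hedged sketch of both fixes, writing the raw Depth pass to a 32-bit float OpenEXR file instead:

# give the imported object a non-zero index so the IndexOB pass is not empty
obj.pass_index = 1

# assumed alternative: write raw depth as 32-bit float EXR, bypassing the MapValue remap
depth_file_output.format.file_format = 'OPEN_EXR'
depth_file_output.format.color_depth = '32'
links.new(render_layers.outputs['Depth'], depth_file_output.inputs[0])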

And define the output node file paths before calling bpy.ops.render.render. (Note that Blender’s File Output nodes append the current frame number, e.g. 0001, to these paths.)

depth_file_output.file_slots[0].path = scene.render.filepath + "_depth"
id_file_output.file_slots[0].path = scene.render.filepath + "_id"

The final results

Execution (from the terminal)

Single instance rendering

blender --background --python <the script>.py -- <argparse keywords: --optional_keyword value [positional argument value]>

e.g. the stanford-shapenet-renderer runs with

blender --background --python render_blender.py -- --output_folder /tmp path_to_model.obj

To install the OpenEXR Python library on macOS, see:
https://github.com/google-research/kubric/issues/19

Parallel rendering (CPU)

Parallel rendering (GPU)
