Every Render-Engine for Blender

Updated 31st October 2021

What is a Render-Engine?

A Render-Engine is an application that takes 3D models and all the associated information, like lights and textures, and converts them into a 2D image.
Here I explain some terms you might not know, which will appear in this article.

Most modern Render-Engines, including most on this list, are ray tracing Render-Engines. They work by simulating the light rays that get emitted by a light source and bounce around the 3D space into the camera.

A fork is when a developer takes preexisting software and continues developing it in a different direction.

rendered with logo of appleseed

Appleseed is the youngest Render-Engine project in the lineup. It is also an open-source ray tracing engine, but with a focus on VFX, which means there are fewer features for stylized renderings, but things like caustics, motion blur, and bokeh are wonderful compared to other Render-Engines. The only problem most people will have is performance.

CGI dog following stick

Cycles is not only the default Render-Engine of Blender, it is also my favorite engine. It is by far the most versatile engine on the list, and it is also one of the quicker ones, especially with the new denoiser.

One thing I am noticing currently, while switching to Octane Render at the media production company where I work, is that Cycles has a lot of different nodes, which allow you to create extremely complex materials, like in these examples from #Nodevember.

I have the feeling that Cycles allows you to work more freely and creatively because of the many options.
The only weak points of Cycles are caustics, volumetrics, and more complex scenes.

E-Cycles was the first fork of Cycles with the goal of making it faster without sacrificing quality. According to the website, it is up to 100% faster than normal Cycles on Nvidia GPUs. But because of the high price of $299 for the current version that supports RTX GPUs, I wasn't willing to pay that much for a fork of existing software, which is the reason I can't say much about it.

Eevee

CGI flying car

Eevee is Blender's built-in real-time engine. The reason to use Eevee is clearly performance. Not having to render for a long time will also help those who don't have access to modern hardware. I am thinking about Nollywood here (no, this isn't a mistake; I am talking about the Nigerian film industry, which most people don't know about, but it is in fact the second largest in the world). Maybe Eevee will enable many studios there to create animated movies.

But it is also the Render-Engine that delivers the worst quality, and it is limited, especially when it comes to things like glass or volumes.

The Indigo Render Engine is the commercial counterpart of the LuxCoreRender engine. It is a ray tracing engine with a focus on photorealism and physical accuracy, which makes it interesting for architecture rendering. But it isn't really popular in combination with Blender, which makes it almost impossible to find documentation, tutorials, or tests about Indigo Renderer for Blender.

K-Cycles is the latest big fork of Cycles. It also has the goal of making Cycles faster. I am currently testing it so that I can validate the information myself, but so far it looks promising. I really like the simple user interface, which has basically one option, which means there isn't anything to learn.

Because of the performance, and the low price of $49 compared to E-Cycles, I think it is interesting: it doesn't require learning new things, and graphics cards aren't really available at the moment.

glass brick with caustics

The LuxCoreRender project is the successor of the LuxRender project: an open-source ray tracing Render-Engine with a focus on realism and physical accuracy. A big difference from other Render-Engines is that LuxCoreRender is excellent at creating caustics, which means that in a scene with a lot of transmissive materials like glass, vodka, or other fluids, it can make a huge difference in realism.

The downside, of course, is that it is a bit slower in most cases, even in those without caustics. But since it is also compatible with Intel's Open Image Denoise, this isn't a problem anymore.

LuxCoreRender comes with two different algorithms.

Path Tracing

This is the same technique as in most other Render-Engines; it can render on the CPU and on multiple GPUs at the same time. You can also make use of NVIDIA's OptiX technology here if you are the proud owner of an RTX graphics card. The big difference from other Render-Engines is the Light Tracing feature, which, if activated, enables you to create good caustics.

Bidirectional

The difference is really complicated to explain, but if you want to know exactly what it does, you can watch this video. What you need to know is that it is the most physically accurate rendering algorithm currently available for Blender.

Here is a comparison between Cycles and the two algorithms of LuxCoreRender. One thing that is clearly noticeable is the caustics on the blue wall, and how the bidirectional path tracer is able to capture way more light, which is the reason it is brighter. In the image rendered with the bidirectional path tracer, you can even see caustics from the pink wall on the blue wall if you look closely.

astronaut on parking lot cgi

Octane Render was the first GPU ray tracing engine and was the fastest production renderer back then. This is the reason it is the most popular Render-Engine outside the Blender universe. When it comes to performance, it is comparable with Cycles, but Octane delivers more realism and is much better when it comes to things like volumes and subsurface scattering.

It comes with three render algorithms, each having its own purpose.

Direct Light

This is the fastest algorithm and delivers the lowest quality. It has what I would call the "Octane look" to it, which some people tend to like, but in my opinion it is a drawback. It seems to be used mostly as an algorithm to create previews.

Path Trace

It is comparable to Cycles: it doesn't have a certain look to it, but it isn't noticeably faster than Cycles either, except for things like volumetrics or subsurface scattering (SSS), in which Octane is way faster.

PMC

PMC seems to be the same as Path Trace but slower, which is not true. It uses another method to distribute the light rays, which makes it faster for very complex scenes, for example with a lot of glass, or some indoor scenes.

But there are also two downsides to this Render-Engine. One is that Octane only works with Nvidia GPUs, not with AMD GPUs or CPUs in general.
The other downside is the price: at $699 per year it is very expensive. Meanwhile, there is also a free tier, but it only supports one GPU and no denoising.

Keep in mind that you need an RTX GPU to use Octane's denoiser. If that's not the case, you can still use the IOID denoiser that comes with Blender since version 2.81. You can read about it in this article: Blender 2.81 new Denoiser (IOID) a real Game changer.

CGI motorbike with AMD Logo

Radeon ProRender is the same Render-Engine as the ProRender that comes with Cinema 4D.

It is a ray tracing engine developed by AMD. The main reason for its existence is that it is the only GPU Render-Engine that works on Mac. This is also the reason the new Mac Pro gets advertised with rendering 6.8 times faster in AMD's ProRender compared to an iMac Pro.

But the Radeon Pro Render is kinda dead by now, and you shouldn’t buy a Mac for CGI or VFX.

Unlike most other engines, Redshift is a biased engine: instead of just simulating the light, it combines different algorithms to create the image. The difference between Redshift and V-Ray is that Redshift was written for the GPU from the beginning, while V-Ray started as a CPU renderer and meanwhile also works on GPUs.

The ability to pick different algorithms makes Redshift flexible and enables the user to render scenes, like indoor scenes, for example, which would take a long time to render in a normal ray tracing engine like Cycles.

It means more settings, which also makes it more complicated. I think it is a fantastic engine for people who want to render big or problematic scenes. But for most people, especially beginners, the price of $22 per month will probably be more than they want to pay in a world with so many other options and Cycles X around the corner.

RenderMan is a very special engine: it is developed and used by Pixar.
It has been used in every Pixar movie and in many other films, including Star Wars, The Lion King, Cars, Batman, and many more.
It is relatively fast in complex scenes, like indoor scenes or scenes with many polygons,
but slower than Cycles or Octane in smaller scenes, even though it supports using the CPU and GPU simultaneously.
What separates RenderMan from the other engines is that it is highly flexible: it can be used for stylized renders as well as for photorealistic renders.
It is also the most feature-packed render engine currently available for Blender.

For non-commercial use it is free, but registration is required. For commercial use, RenderMan costs $250 per year.

Before Otoy released the first version of Octane Render back in 2008, V-Ray was The Render Engine.
Today, there are many more options, especially ones that promise better performance, but still most VFX in Hollywood feature movies are made with V-Ray.
It is not a pure path tracing engine, and it is not unbiased, but this different architecture allows V-Ray to render big scenes much faster.

Originally, it was a CPU-based render engine. Meanwhile, there is also a GPU version available, but it is not as elaborate and far developed as the CPU version.
For architects or VFX artists who want to render complex scenes it is the way to go, but for everyone else other engines are probably better suited, since they are faster and easier to set up.

Since V-Ray rarely gets used in combination with Blender, there isn't much information about it out there, which makes it hard to learn.

Workbench Engine

The Workbench Engine is already integrated into Blender, but I think it rarely gets used and doesn’t get the attention it deserves.

The Workbench Engine was actually created for modeling and sculpting, as a viewport engine. But it can also be helpful for other things, like exporting something completely shadeless, like a screen in a mock-up, for example.

I am sure there are a lot of different ways to make use of it. I used it, for example, in a 2D explainer for a teeth-cleaning product that was made in a flat style in After Effects. The client said it was too hard to understand what the object was, so he wanted it to rotate instead of being completely flat. And to match the flat style, I exported it with the Workbench Engine.

What about the other ones?

There are also Render-Engines like YafaRay, POV-Ray, and so on. But none of them are compatible with Blender 2.9+, which makes them irrelevant.


ACES: how to install the better color space in Blender

What is ACES

ACES stands for Academy Color Encoding System. For those who don't know which Academy I am talking about: it is the Academy of Motion Picture Arts and Sciences, which is primarily known for giving out the Oscar awards. So we are talking about the real deal with ACES.
ACES is a color space like sRGB, Rec. 2020, or Filmic.

The idea behind ACES is to have one color space to work in. That means footage from different cameras and animated footage gets converted into ACES, which makes it way easier to combine different sources. On top of that, it responds like a camera would, which makes it the ideal color space for color correction and color grading.

Why ACES in Blender makes sense.

Everyone who has used Blender for a long time maybe remembers when Andrew Price (Blender Guru) published his video about the Filmic color space in Blender, "The Secret Ingredient to Photorealism".
After his video, many people started installing Filmic in Blender, and now it not only comes pre-installed in Blender, it is also the default color space.
But even if Filmic was an extremely helpful addition to Blender and improved the renderings of many people, Filmic was never perfect, which is something I discovered when I worked more in the professional world.
The renderings look pale and the colors are always off and far from accurate, which is extremely problematic if accurate colors are important for a client. Fixing this in post is also far from ideal.
This is the reason I started using sRGB, or sometimes combined Filmic and sRGB.

This video here, for example, was the first video where I switched back to sRGB for those reasons. The rendering has no grading, and the colors are way more accurate than would ever be possible in Filmic, no matter how much time I invested in post-processing.
But sRGB also isn't ideal, which is the reason I sometimes combined renderings, but that made everything more complicated.
And this is the reason I started using ACES. It has the realistic color response of Filmic without destroying the colors.
I see it as the successor of Filmic, and since it is becoming the industry standard, one day it will also be the standard in Blender.
But until this happens, you have to install it manually.

How to install ACES in Blender

Installing ACES in Blender is fairly simple. You just have to open the installation folder of Blender on your computer, which you can usually find under Program Files > Blender Foundation > Blender x.xx. In this folder there should be one folder which has the same name as the version of Blender you are using, 2.93 for example. In this folder, open datafiles > colormanagement; what you see in this folder is what you need to replace.

Just delete everything in the folder and replace it with the files you can download here. This contains a changed version of the ACES config file, which is reduced from over 300 color spaces to just the few that are relevant in Blender.
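If you do this more than once (for example after every Blender update), you can automate the swap with a small Python sketch. The paths below are assumptions; adjust them to your install location and Blender version:

```python
import shutil
from pathlib import Path

# Assumed paths: adjust to your Blender install location and version
blender_version_dir = Path(r"C:\Program Files\Blender Foundation\Blender 2.93\2.93")
cm_dir = blender_version_dir / "datafiles" / "colormanagement"
aces_dir = Path(r"C:\Downloads\aces_colormanagement")  # the downloaded ACES files

# Keep the stock config as a backup so you can switch back to Filmic later
backup = cm_dir.with_name("colormanagement_backup")
if not backup.exists():
    shutil.move(str(cm_dir), str(backup))

# Copy the ACES config into place
shutil.copytree(str(aces_dir), str(cm_dir))
print("ACES config installed to", cm_dir)
```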

Now ACES is installed, but using it is different from Filmic, so read the How to use ACES in Blender section before you use it.

How to use ACES in Blender

The first step should be to set the view transform correctly. You can find the settings under Render Properties > Color Management.

By default, this is set to sRGB. Set it to the color space of your monitor; in most cases this is sRGB, but if you are using a more professional monitor which can display a bigger range of colors, this can differ.

The second and most important thing you need to do is to set the color space of every image you are using in Blender correctly.

Use Utility – Linear – sRGB for HDRIs,
Utility – sRGB – Texture for albedo textures (textures that show the color of something), and
role_data for any other textures like height, displacement, roughness, normal, etc.
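If a scene already contains many textures, you can also assign these color spaces with a small bpy sketch instead of clicking through every image. The name-matching heuristic below is only an illustrative assumption, and the exact color-space names depend on the installed ACES config:

```python
import bpy

# Assign ACES color spaces based on a simple name heuristic (illustrative only)
for img in bpy.data.images:
    name = img.name.lower()
    if "hdri" in name or img.file_format == 'HDR':
        img.colorspace_settings.name = "Utility - Linear - sRGB"
    elif any(k in name for k in ("albedo", "diffuse", "color")):
        img.colorspace_settings.name = "Utility - sRGB - Texture"
    else:  # roughness, normal, height, displacement, ...
        img.colorspace_settings.name = "role_data"
    print(img.name, "->", img.colorspace_settings.name)
```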


Denoising in Blender 2.93 (for every Engine)

Denoising in Cycles

Step 1 (Activate the Denoising Data Pass)

You can activate the denoising data pass by ticking the Denoising Data checkbox, which you can find in the Properties panel under View Layer Properties > Passes > Data.

Step 2 (Switch to Compositing Workspace)

In the Compositor you can set what should happen with the image after it is rendered. In this case, you want to denoise the image. Blender already has a workspace set up for compositing; you can switch to it by clicking on Compositing at the top of the window.

Step 3 (Activate Compositing Nodes)

To activate the Nodes in the Compositor, you need to click on Use Nodes.

Step 4 (Add Denoise Node)

To add the Denoise node, just click on Add; under Filter you can find Denoise. Alternatively, you can press Shift+A. After that, drag the node between the Render Layers and the Composite node.

Step 5 (Connect the Denoise Node)

Connect the Denoise node according to the image: Noisy Image to Image, Denoising Normal to Normal, and Denoising Albedo to Albedo.
(This way the algorithm simply has more information, which makes it possible to create better results.)
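For reference, Steps 1 to 5 can also be scripted. This is a minimal sketch, assuming Blender 2.9x with Cycles and the default Render Layers and Composite nodes in the scene:

```python
import bpy

scene = bpy.context.scene

# Step 1: store the denoising data passes for the active view layer
bpy.context.view_layer.cycles.denoising_store_passes = True

# Steps 3-5: enable compositor nodes and wire up a Denoise node
scene.use_nodes = True
tree = scene.node_tree
render_layers = tree.nodes["Render Layers"]
composite = tree.nodes["Composite"]

denoise = tree.nodes.new("CompositorNodeDenoise")
tree.links.new(render_layers.outputs["Noisy Image"], denoise.inputs["Image"])
tree.links.new(render_layers.outputs["Denoising Normal"], denoise.inputs["Normal"])
tree.links.new(render_layers.outputs["Denoising Albedo"], denoise.inputs["Albedo"])
tree.links.new(denoise.outputs["Image"], composite.inputs["Image"])
```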

Step 6 (Activate the Denoiser in the Viewport)

To activate the denoiser in the viewport, check the Viewport box. You can find the checkbox in the Properties panel under Render Properties > Sampling > Denoising.
You shouldn't use this option for denoising the final render, because of the high decrease in performance.

To get back to the default Workspace, just click on Layout at the Top of the Window.

Denoising in Cycles with an RTX GPU

Simply tick the Render and Viewport boxes, which you can find in the Properties panel > Render Properties > Sampling > Denoising, and switch from NLM to OptiX.

Make sure that you don’t use the Denoise Node in the Compositor. Using multiple denoising algorithms simultaneously can cause issues.
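The same two checkboxes can be set from Python. A minimal sketch, assuming Blender 2.9x and an RTX card:

```python
import bpy

cycles = bpy.context.scene.cycles
cycles.use_denoising = True          # denoise the final render
cycles.denoiser = 'OPTIX'            # instead of NLM
cycles.use_preview_denoising = True  # denoise the viewport too
cycles.preview_denoiser = 'OPTIX'
```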

Denoising in LuxCore Render

Step 1 (Activate the Denoiser)

Check the Denoise Box, which you can find in the Properties panel > Render Properties.

Step 2 (Set Halt Conditions)

The LuxCore Render Engine, by default, runs forever.
The denoiser only starts after the render, which is the reason you have to define a point at which the rendering process stops.
You can do that by checking the Halt Conditions box, which you can find in the Properties panel > Render Properties.
By default, it will Use Time, which means the time (in seconds) defines when the render stops. You can also change this to Use Samples, where the render runs until a specified number of samples is reached, like in Cycles. Alternatively, you can Use Noise Threshold, where the render stops depending on how noisy the image is.

Step 3 (Switch to Compositing Workspace)

In the Compositor you can set what should happen with the image after it has been rendered. Here, you want to denoise the image. Blender already has a workspace set up for compositing, to which you can switch by clicking on Compositing at the top of the window.

Step 4 (Activate Compositing Nodes)

To activate the Nodes in the Compositor, you need to click on Use Nodes.

Step 5 (Connect the DENOISED Output)

Connect the DENOISED output of the Render Layers node with the Image input of the Composite node to use the denoised image instead of the default image as your output.

To get back to the default Workspace, just click on Layout at the Top of the Window.

Denoising in Octane Render

Step 1 (Activate the Denoise Beauty Pass)

Check the Beauty Box, which you can find in the Properties panel > View Layer Properties > Passes > Denoiser.

Step 2 (Activate Octane Camera Imager)

Tick the Octane Camera Imager (Render Mode) box, which you can find if you select a camera in the Properties panel > Camera Properties.

Note that this setting only applies to the selected camera. If you are using multiple cameras, you have to do this and Step 3 for every camera on which you want to use the denoiser.

Step 3 (Activate the Denoiser)

Tick the Enable Denoising and Denoise Volume boxes, which you can find if you select the camera in the Properties panel > Camera Properties > Octane Camera Imager (Render Mode) > Spectral AI Denoiser.

Step 4 (Switch to Compositing Workspace)

In the Compositor you can set what should happen with the image after it has been rendered. Here, you want to denoise the image. Blender already has a workspace set up for compositing, to which you can switch by clicking on Compositing at the top of the window.

Step 5 (Activate Compositing Nodes)

To activate the Nodes in the Compositor, you need to click on Use Nodes.

Step 6 (Connect DENOISED with Image Input)

Connect the OctDenoiserBeauty Output of the Render Layers Node with the Image Input of the Composite Node.

To get back to the default Workspace, just click on Layout at the Top of the Window.


Design / Media Trends of the next few years

3D

Published 13th February 2021

Every year around New Year, articles about design trends pop up. I wanted to do something different and list trends or movements which I believe will stay for much longer than a year, which is in my opinion much more useful information, because it enables you to learn new things that stay relevant long-term.

AI Tools

More and more software companies like Adobe or Autodesk will create better AI-driven features inside their software. Already there are features like object recognition and AI filters to change faces inside of Photoshop. Another really popular AI feature is the denoising of 3D-rendered images.

There are also interesting standalone programs like Cascadeur or AI Gigapixel, which are built around the idea of using AI to get better results than the traditional method.

Those tools and features will appear more and more often in the next years and can make the creator more powerful, but they will also set a higher standard.

3D

A few years ago, 3D was a remarkable and special skill; I even got my first job as a motion designer because I had some 3D skills. But nowadays you rarely find a job description for a graphic/motion designer that doesn't include things like "basic skills in C4D or Blender are a plus". So the demand is definitely there, and one day it will be the standard.

Or if you are a creator, maybe it is time to learn it. Because let's be honest: during these times, we all sit at home and play video games or watch pointless YouTube videos and TikTok most of the time.

Here is the Tutorial, I recommend to every beginner YouTube.com/Blender Beginner Tutorial – Part 1
And here is the Download Link for Blender Blender.org/Download

Online Events

Since last year, many events haven't happened the way they were planned due to contact restrictions, which meant some event organizers had to look for alternatives.

And online events are extremely practical: they are relatively inexpensive and not limited by location. With more possibilities, this might be part of the future, especially for events such as fairs.

Facebook Alegria

Few people have heard about it, but everyone has seen it. Alegria is the name of the ecosystem of Facebook illustrations and probably one of the most copied design ecosystems on the internet.

And there are many reasons that speak for it: it is inclusive, since the people shown aren't recognizable as real humans and are too abstract for anyone to feel discriminated against; the style is simple to recreate and adapt, which saves time and money; and it works really well as a simple vector file, which makes it extremely lightweight and well-performing for web applications.

The downside is that everything looks more or less the same. It is just not interesting; it is more like a placeholder.

The actual creator of this style is the LA-based design Agency Buck.

Less demand for easy things

More small businesses recognize that it is important to also be present on the web and in media, which created the market for easy-to-use media creation tools such as Adobe Spark or Canva. Platforms such as Fiverr also profited from this need.

That means there will be less demand for graphic designers doing simple things, or things for small businesses that don't really have a marketing budget.

Holistic Design Systems

Almost every big tech company has a design system: a system that is adaptable to every situation and every medium. Since it is becoming the norm that companies are present on multiple platforms and mediums, there will be a need for holistic concepts and ideas.

Web micro-Interactions

Since smartphones have become advanced enough over the last few years to support almost everything a desktop does, and internet speed constantly increases, the web has become more of a playground for designers. Websites can be bigger and more complex, which enables web developers to create a dialog instead of a monologue: a website that talks back to the user, which can make communication way easier.

The most important example of this is the confirmation message you get after clicking send on a form: without it, the user is unsettled, becomes unhappy, and subconsciously connects bad feelings with the website and the website owner. I think there are a lot of examples like this where micro-interactions can improve the communication of a website.

There are already people whose only job is to create such micro-interactions, even though I think it is currently more part of a UI designer's job.

VR/AR

Currently, VR and AR are more of a gimmick or a niche. But big tech names such as Mark Zuckerberg or Tim Cook think they will be the future. I think this is currently strongly fueled by the contact restrictions and lockdowns, which created the need to shift more things into the online world.

Especially for fairs, many clients ask for VR and real-time solutions. And currently, it is a real niche. This means perhaps it is a good bandwagon to jump on now, and maybe even build a company around this field.

Human focused Design

Human-focused design is the idea of designing something with the realistic behavior of humans in mind. It means making text big enough that it is easy to read, and making things less boring so that people don't instantly lose interest, especially on the internet. It is about choosing colors with the question of how they influence the behavior of the viewer. Overall, it is about letting psychology and knowledge of human behavior influence the design.

Back to basic

It is a more general development of the last 20 years that people seem to step back a bit from constant progress: people care more about work-life balance, reducing stress, feeling connected to nature, consuming less, being more minimalistic, and focusing on what is important.

And I think we are seeing something similar when it comes to media. Minimalism, for example, is a design philosophy that moved into the media world roughly 10 years ago. But it is more than that: people want to spend less time on their phones and also value printed media more.


Tips for better camera Animations

3D

Published 31st January 2021

Orient yourself by what is possible in reality

In the article Tips for more realistic renderings in Blender, I already wrote about how important it is to have reference images of something real for orientation. You can apply the same thing to your camera animations.

Focal Length

Do some research about lenses: which lenses exist and what they get used for. If you are making an animation of a landscape, for example, your focal length is maybe 12 mm, which is the shortest focal length that ARRI offers.

If you are making an animation of something that is supposed to look far away, like an airplane, a rocket, or a car, maybe you should use something like 155 mm or 280 mm, which is the longest focal length you can find on an ARRI lens.

If you use values far above or below that, it will look a bit weird; because such lenses don't exist, humans will subconsciously notice that something is off.

Who is holding the Camera?

Is a person holding the Camera? Is the Camera on a tripod? Or is the Camera on a Camera Robot? Is your Camera a GoPro on the head of a Dog?

There is an endless number of things your camera can be attached to, and all of them influence how the camera moves, which camera and lens get used, and where your camera is positioned. If you have a handheld camera, it is unlikely that the camera is 3 meters above the ground. It is more likely that it is 170 cm above the ground and shaking a bit (Ian Hubert made a quick tutorial about that).

Think about what is holding your camera and what influence that will have.

Maybe your image isn't 100% sharp and has some noise and distortion as well: all things that can help you make your animation more believable.

Use Motion Blur!

Motion blur is literally just one click to set up, and so helpful at the same time.

It makes the Animation feel more real because every video taken by a real camera has Motion Blur.

It helps you hide some mistakes and can cover up some missing details.

Also, a director with whom I work every once in a while is known for using motion blur as a transition effect.

Use depth of field

Depth of field also exists in every video and can likewise help you hide some details.

It is also a tool to guide the viewer's attention, so put some thought into where to place the focus.

Often, I create an empty object which I use as a focus point, because it makes animating the focus much easier. It is possible without an extra object, but that makes it harder and can cause problems if you change the animation of the camera later on.
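This setup is quick to script as well. A minimal sketch in Python, assuming the scene has an active camera; the empty's name "FocusTarget" is just an example:

```python
import bpy

cam = bpy.context.scene.camera

# Create (or reuse) an empty that acts as the focus point
target = bpy.data.objects.get("FocusTarget")
if target is None:
    target = bpy.data.objects.new("FocusTarget", None)
    bpy.context.collection.objects.link(target)

cam.data.dof.use_dof = True
cam.data.dof.focus_object = target  # the focus distance now follows the empty
cam.data.dof.aperture_fstop = 2.8   # use it, but don't overuse it
```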

Most people connect a strong depth of field with high quality, like they connect a lot of bass with high-quality sound. The reason for that is that big image sensors and lenses with a wide aperture, which are both found on more expensive cameras and lenses, cause more depth of field. But because of that, too much blur is a common beginner mistake. Use it, but don't overuse it.

Make use of the graph editor

A keyframe describes a certain state at a certain frame, but no one creates a keyframe for every frame. This leaves the question of what happens between two keyframes: does it transition from one state to the other at a constant speed? Does it start slow and end slow? Does it start fast and end slow? This is what the curves in the Graph Editor describe. In most cases, it is good if the curves are smooth as butter, but it really depends.
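As a starting point, you can smooth all keyframes of the active camera from Python. A minimal sketch, assuming the camera is already animated:

```python
import bpy

cam = bpy.context.scene.camera
action = cam.animation_data.action  # assumes the camera has keyframes

# Bezier interpolation with auto-clamped handles gives butter-smooth curves
for fcurve in action.fcurves:
    for kp in fcurve.keyframe_points:
        kp.interpolation = 'BEZIER'
        kp.handle_left_type = 'AUTO_CLAMPED'
        kp.handle_right_type = 'AUTO_CLAMPED'
```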

Avoid having no movement

This is a mistake I frequently see in bad product videos, where the animation just stops and there is only a static image. In every good video there is always some motion. It doesn't need to be the camera, but there always needs to be something moving.

Put some thought into the image composition

One thing I really like about Pixar movies is that every frame looks like an image that could be printed and put into a picture frame. This doesn't need to be the case, but go to different frames in your animation and ask yourself: is this frame good on its own, and how can I improve it?

There is an endless number of guidelines, and it is a huge topic, so I linked a separate article about it.

Choose the right frame rate

Frame rates also have an impact on how an animation feels.

6-12 Frames per second

This is the typical frame rate of anime. The reason for that is that in the past every frame was drawn by hand, which meant more frames drastically increased the amount of time and cost required. Now it is more of a nostalgic thing, to create this hand-drawn look.

But it also raises the question of whether this frame rate still makes sense, if more and more people can't appreciate it because they are simply too young; to them, it just looks choppy.

23.976 Frames per second

This is said to be roughly the rate at which the human eye starts to perceive a sequence of images as continuous movement instead of single images.

24 Frames per second

The standard frame rate in cinema. When videos were made on film, film was expensive and more frames required more film, which is why they shot slightly above the minimum requirement for human eyes, and no one has seen a reason to change that to this day. This means if you want something to look cinematic, whatever that means, this is the right frame rate for you.

25 Frames per second

This is the standard frame rate for TV here in Europe.

30 Frames per second

This is the standard frame rate for TV in the US and the most common on the internet these days.

60 Frames per second

Makes the video feel supernaturally smooth. I think it feels fascinating and is good for videos where it is important that people can recognize what is happening on the screen.

Think out of the box

There are so many ways you can create a video differently. No one said that a video needs to be 16:9. Why not make it upright, or with multiple views at the same time? I am still waiting for the first ad agency that creates a multimillion-dollar video project in 9:16 because their target group primarily uses smartphones.

But I think I know why it will take a while until that happens.

"As I said: balls, we need balls. If you know what that means."

Oliver Kahn


Building a PC for Blender

Desktop or Laptop?

Benefits of a Laptop

  • It doesn't require any knowledge
  • It is ready to use right out of the box
  • It is available in every country
  • It is mobile (more or less, depending on the laptop)
  • It consumes less energy

Benefits of a Desktop PC

  • Better price-to-performance ratio.
  • You can build a PC that fits your needs perfectly.
  • It is upgradeable.
  • The cooling is way better, which allows the hardware to run cooler and quieter. This can be important for extended rendering or simulation sessions; a laptop can heat up really quickly, which makes it slower.
  • Higher maximum performance.

What I use

Whether I am at home or in my office, I use desktop PCs only.
Mainly because of the performance; I hate waiting. But also because of the possibility to upgrade everything, and the possibility to repair everything by just swapping parts, which can be a huge cost saver.

The PC I am using at home is over 10 years old by now and was bought second-hand from my friend's brother back when I was 14, but I have swapped every single part over time.

I also prefer sitting in front of a big setup, because the keyboard and mouse are just better. Even though I liked the trackpad of the Apple MacBook, it's still not as good as using a real mouse. And when it comes to multiple monitors, there are even studies showing that using 2 screens is way better for work performance, and 3 screens improve it even more.

How much RAM do I need?

The RAM determines how much you can do on the PC at the same time: how many tabs you can open in the browser, how many programs you can have open, or how many and how complex the objects in your 3D project can be. A popular misconception is that more RAM makes the PC faster, which is not true.

The minimum that Blender requires is 4 GB, but I would recommend getting at least 8 GB. Personally, I have never needed more than 16 GB.

Do I need a GPU to use Blender?

To see anything on the screen, you need to have a GPU. Most Intel CPUs come with an integrated GPU. But does it make sense to buy a dedicated GPU separately?

When it comes to rendering, no matter if it is Eevee or Cycles, Blender can render way faster with GPUs. Some render engines like Octane, which I mentioned in the article "Every Render-Engine for Blender", even require a GPU to work.

So it absolutely makes sense to get a dedicated GPU, even if it is not necessary to run Blender.

Nvidia or AMD?

When it comes to GPUs, there is always the age-old question: what is better, AMD or Nvidia?
But for use in Blender, this question hardly exists right now. Nvidia's RTX cards support OptiX, which enables Cycles to render roughly 30% faster.

There is also the real-time AI OptiX denoiser, which only runs on Nvidia GPUs. But it is not so real-time on GTX cards.

In Blender 2.92, OptiX will also be used to create motion blur.

Especially since the release of the RTX 30 series, it is pretty clear that you will get way more for your money with an Nvidia GPU. Provided that you can get one.

Can I use multiple GPUs for better performance?

Yes, for rendering, 2 (identical) GPUs mean roughly 2 times the performance. But you can also use 3 or 4 GPUs. As far as I know, the limit is 32, but I have never tested more than 4.

You can also use different GPUs at the same time, but they have to use the same render kernel. For example, you can't render with an AMD GPU, which uses OpenCL, and an Nvidia GPU, which uses CUDA, at the same time. Also, if you want to render with an OptiX GPU (like the RTX 3080) together with a GPU that doesn't support OptiX, you can't use the fast OptiX kernel and have to use the slower CUDA kernel.

You could also disable the GPU that is connected to your monitor, so that you always have a fluid interface and can even play games while rendering.
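Device selection can also be done in the Cycles preferences via Python. A minimal sketch, assuming an Nvidia setup with OptiX:

```python
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPTIX'  # or 'CUDA' / 'OPENCL', depending on your GPUs
prefs.get_devices()                  # refresh the device list

for device in prefs.devices:
    # Enable all GPUs; leave the CPU (or your display GPU) disabled
    device.use = (device.type != 'CPU')
    print(device.name, device.type, "->", "on" if device.use else "off")

bpy.context.scene.cycles.device = 'GPU'
```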

Which CPU is necessary?

Blender doesn't require a lot of CPU power. Of course, Blender can render with CPU and GPU simultaneously, but the performance you get per dollar from a better GPU is way more than what you get from a better CPU.

That means a middle-class CPU like an Intel Core i5 or an AMD Ryzen 5 with a good price-to-performance ratio, or "Preis-Leistungs-Verhältnis" as we call it in Germany, is just fine.

The things that can make use of good CPU performance are simulations, like fluid simulations (also with the FLIP Fluids add-on) or smoke simulations. They can make use of every CPU core available, even on a 64-core CPU.


Best Resources for 3D Artists

Screenshot of Texture Haven Website

Texture Haven is 100% financed by donations, which is the reason there are no limitations. All textures are available in 8K, which is more than enough for almost every project, and for every texture there are AO, diffuse, displacement, normal, and roughness maps available. The only problem is that there are only 151 textures available, but hopefully this will change soon.

Pricing: Free

Screenshot of Pixar One Twenty Eight Website

The legendary 3D animation studio Pixar developed a texture pack themselves, which they update frequently. All the textures are seamless thanks to a technique they patented in 1993. I think it can be very interesting for people who are doing stylized renderings and characters.

Pricing: Free

Screenshot of Texture Ninja Website

Texture Ninja is, with over 2,000 textures, the biggest completely free texture library. The only downside is that image size and quality vary, and seamless textures as well as PBR textures are rare.

Pricing: Free

Screenshot of CC0 Textures Website

CC0 Textures is also completely free, and no account is required. It offers over 700 seamless PBR textures.

Pricing: Free

Screenshot of 3dtextures.me Website

3dtextures.me also has a big variety of textures, all made with Substance Designer, which means all of them are seamless. But to download them in 4K you need to become a Patreon supporter; as a free user, you can only download them in 1K, which may not be enough for some projects.

Pricing: 1K textures free; 4K textures starting at $5 per month

Screenshot of textures.com Website

textures.com has by far the biggest selection of textures and gets used by the biggest studios around the globe. They offer not only textures but also 3D scans and objects. You need to have an account, which is completely free, but to download textures in bigger resolutions you need a premium account.

Pricing: Free for low resolution; credit packs for high resolution starting at $19

Screenshot of Poliigon Website

Poliigon is the most premium resource for textures and offers the highest quality assets, with a huge variety of high-resolution, seamless textures. Poliigon also develops plugins for Blender, Cinema 4D, Maya, and 3ds Max for faster import.

Pricing: Starting at $12

3D Objects

Especially as a professional artist, or someone who doesn't primarily do 3D, saving time can be important. The best way to achieve this is to just not do the work: don't redo things that already exist. I was never able to understand why people take pictures of the Eiffel Tower when chances are high that much better photos already exist on the internet.

Screenshot of Three D Scans Website

ThreeDScan offers free high-resolution scans of statues, which get used by many 3D artists. Every 3D model is completely free and without any copyright restrictions. Don't worry if your browser says the site is unsafe; this is only because there is no SSL certificate on the site.

Pricing: Free

Greg Zaal, who also created HDRI Haven and Texture Haven, now has a new project where he uploads 3D models that are completely free to use. There are also no limitations or anything, since the project runs on donations. The only downside is that you can currently only find 39 models, but I guess this will change soon.

Pricing: Free

Screenshot of Sketchfab Website

Sketchfab started as a social media/portfolio platform which enabled people to upload their 3D models and share them with other people. But now it is also a marketplace for selling 3D models.

Pricing: Free – $290

Screenshot of Blendswap Website

This one is only for Blender users, but it is awesome, since you can not only download 3D models, you can download entire .blend files, which can be really helpful for beginners who want to reverse-engineer stuff.

Pricing: Free

HDRIs

HDRIs, or spherical images, are a way to quickly add lighting to a scene or to add some extra detail to the lighting for more realism.

Screenshot of HDRI Haven Website

HDRI Haven is also 100% financed by donations, which is the reason all HDRIs are completely free to download, without any limitations.

Pricing: Free

Screenshot of HDR Maps Website

HDRMaps is the website with the most HDRIs. They even get used in third-party applications like Substance Painter, and they also offer a lot of freebies.

Pricing: Free and paid products

Pixar also started to create some HDRIs for their RenderMan users, but of course you can use them with any software.

Pricing: Free

screenshot of hdri skies website

HDRI Skies is specialized in skies, which means they offer the biggest variety of skies available.

Pricing: Low resolution free; high resolution $15


Export animations in Blender

Edited 11th December 2020

I think exporting a video out of Blender can be a bit confusing for beginners. It also took me a while to arrive at the workflow I am currently using, so perhaps not only beginners but also more advanced users can learn from this.

I render in single frames and in a linear format, because I want to be able to pause the rendering process and be as flexible as possible with my rendering when it comes to post-processing.

Step 1. Bake your simulations

In Blender, there are multiple types of simulations. Make sure that you bake every single one of them, which saves the simulation even when you close and reopen the project. This is not only a time-saver because you don't have to run the simulation process again; a simulation also doesn't create the same result every time, which would mean that if you paused rendering and started it again, it would create a glitch in the simulation.
To achieve this, you just have to click on the Bake button in each cache menu.

Screenshot of Blender baking Cache
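If a file contains many caches, the baking can also be triggered from Python in one go. A minimal sketch; note that Mantaflow fluid and smoke domains have their own bake operators:

```python
import bpy

# Bake all point caches (cloth, particles, rigid bodies, ...) in the file
bpy.ops.ptcache.bake_all(bake=True)

# Mantaflow fluid/smoke domains are baked separately, e.g.:
# bpy.ops.fluid.bake_all()  # requires a fluid domain in the current context
```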

Step 2. Activate the Denoiser

In Blender 2.81, Blender got its first AI denoiser, which delivers really remarkable results. You can read more about it in this article.

Comparison of an image denoised by Intel Open Image Denoise and an image without denoising

Since version 2.82, there is another AI denoiser by Nvidia, which was developed for real-time ray tracing on Nvidia's RTX GPUs. But even though it is now possible to use it without an RTX GPU, it is much slower than using the IOID in the Compositor, which means if you don't own an RTX GPU you should do it the way I described here.

If you use a denoiser, you can use fewer samples, because you don't have to worry about noise, which can lower your export times dramatically.

Step 3. Check everything

Check that everything you want to render is enabled in the Outliner, check that the resolution you set is correct, and make a test render to make sure you didn't forget anything.

Step 4. Set your output settings

Create a folder where you want your files exported, because you will have a single image for every frame.
And choose OpenEXR as the file format. It will not only give us the highest image quality; it is also a linear format without any color space, which gives you the most possibilities when it comes to choosing the color space and post-processing without losing any quality.

Screenshot of Blender render Output settings
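These output settings can also be set from Python. A minimal sketch, where the folder name "frames" is just an example:

```python
import bpy

render = bpy.context.scene.render
render.filepath = "//frames/"                   # relative to the .blend file
render.image_settings.file_format = 'OPEN_EXR'  # linear, lossless frames
render.image_settings.color_depth = '32'        # full float precision
render.image_settings.exr_codec = 'ZIP'         # lossless compression
```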

Step 5. Start the rendering process

Before you start the rendering process, don't forget to save. Also, don't worry if you open up one of the images and it looks a bit weird; that's caused by your OS not knowing what the format is and how to display it.

Screenshot of Blender clicking on Render Animation

Step 6. Import the Images in Davinci

DaVinci Resolve is the standard Hollywood color grading software. But it is also possible to edit with DaVinci, and the standard version is completely free and doesn't have any limitations on the basic functions. (It is also possible to edit inside of Blender, but currently this is not recommendable.)
To import the images, just create a new project and click on File > Import File > Import Media, then navigate to the folder where you exported the images and select all of them.
Then just drag the footage onto the timeline.

The images don't contain any information about the frame rate, which is the reason you have to set the frame rate manually if it is not 24. You can do that by right-clicking on the footage, clicking on Clip Attributes, and changing the frame rate.

Screenshot of Davinci Resolve importing Media...

Step 7. Change the Colorspace

A benefit of exporting everything in a linear color space is that you can switch the color space after the rendering process. A reason for this could be more realism from the Filmic color space, or more color accuracy from the standard color space.

To change the color space, you have to switch to the Fusion tab, which is something like the Compositor in Blender, but much more powerful and advanced.

Screenshot of Davinci Resolve Fusion tab

Then add an OCIO Colorspace node by right-clicking > Add Tools > Color > OCIO Colorspace.
Hold down Shift and drop the node between the two other nodes.

SCreenshot of Davinci Resolve adding OCIO Colorspace Node

In the Inspector Panel on the right side, you have the settings for each Node.
For the OCIO Colorspace node, load the OCIO config file of Blender: just click on Browse and navigate to the config.ocio file. You can find it in the installation folder under datafiles > colormanagement > config.ocio.

Screenshot of Davinci resolve loading in Blender OCIO file

After this, set the Source Space to linear and the Output Space to whichever color space you prefer. For me, this is Filmic sRGB or sRGB in most cases.

Screenshot of Davinci Resolve with OCIO Colorspace Node

Step 8. Export

What you do next is up to you. You can post-process and color grade it, or just export it.


Tips for more realistic Renderings in Blender

10+1 Tips to get more realistic renderings in Blender.

Updated 16th December 2020

In this article, I want to give you some tips for getting more realistic renderings, and also explain why you should use them and what they actually do, to give you some deeper knowledge.

Everything here can be applied to CG in general, but it is written specifically for Blender.

Use HDRIs!

HDRIs, or spherical images as they are also called, are images that get wrapped around a 3D scene to light the scene. The benefit of HDRI lighting is that it brings a lot of detail with it, which makes the render much more realistic, and on top of that it is pretty easy to set up.

I usually supplement the HDRI with emission planes or normal lights to shape the lighting the way I want it.

Since HDRI means "High Dynamic Range Image", images with a higher dynamic range are used for this, which is recommended because it makes a huge difference even if you don't export your images in a high dynamic range. You can recognize these images by their format: formats like Radiance HDR, OpenEXR, and TIFF can contain HDR images, while formats like JPEG or PNG cannot.
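Setting up an HDRI as world lighting takes only a few nodes, and can also be scripted. A minimal sketch in Python, where the file path is just an example:

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

# Load the HDRI (example path) and feed it into the Background shader
env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdris/studio_4k.exr")

background = nodes["Background"]
links.new(env.outputs["Color"], background.inputs["Color"])
background.inputs["Strength"].default_value = 1.0  # overall light intensity
```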

If you want to know, where you can download HDRIs I can recommend my article about the Best Ressources for 3D Artists.

To everyone who wants to create studio lighting, I recommend the Blender Studio Light add-on. It is a free add-on that allows you to create your own HDRIs within Blender by using images of real lamps, which, like normal HDRIs, also gives you the advantage of extra detail for more realistic renderings.

Bokeh

little girl on a horse of a carousel 3d animated

By using the depth of field feature, which you can find in the camera menu, you can create bokeh, which makes your image more realistic, since it emulates something that happens in every camera.

It also gives you the possibility to hide things you don't want to show, like parts with fewer details or seams in the texture. Because let's face it: you can't create everything with high detail, because it would eat up way too much time in some cases.

Another benefit of using depth of field is that it enables you to guide the viewer's attention, which makes the image more beautiful on a subconscious level.

Bevels

Blender 3D Screenshot creating Bevel on a Cylinder

If you look around you, no matter where you are, you will see that there aren't many objects with sharp edges around you. There are some exceptions, like sharp or really thin objects, but these are just exceptions. And making something look realistic means making it look like it would in reality.

Even if you create more abstract motion graphics, this small light edge can enhance the final result.

Since Blender version 2.8, you can also create a bevel inside the shader by using the Bevel node. The Samples value equals the resolution, and the Radius the size of the bevel. The benefit of this method is that it uses fewer computing resources, but when it comes to quality, a bevel made out of mesh still looks better.

Screenshot of Nodes creating Shader for Blender 3D

Motion-Blur

Motion blur is something that is in every photo or video. Sometimes it is so slight that it is invisible or only gets recognized subconsciously, but technically speaking it is there all the time, which is the reason I recommend using it if realism is your goal.

In Blender 2.8, you just have to tick the box in the Motion Blur menu, which you can find in the Render Settings tab.

The default options should be right most of the time, but I will still explain the settings to you.

Shutter

Motion blur is a build-up of two effects. The first one is called shutter, which in real life happens because the camera captures the light over a small period of time, usually half the length of a frame. This means if you are filming with 24 frames per second, the period of time in which the light hits the sensor is 1/48 second; with 25 frames it is 1/50 second, with 30 frames 1/60, and so on.

The longer the light can hit the sensor, the more motion blur you get, and vice versa. You can set the amount of time the light hits the sensor per frame with the Shutter option, which is set to 0.5 by default, meaning, as mentioned, half the length of a frame (24 FPS 1/48 s, 25 FPS 1/50 s, 30 FPS 1/60 s, ...), which is also the default setting of most cameras. You can also change this to 1, for example, which would mean the light hits the sensor for the complete length of the frame (24 FPS 1/24 s, 25 FPS 1/25 s, 30 FPS 1/30 s, ...).

You can use this if you want to combine video and CGI and you had a different shutter setting in your camera, but most of the time 0.5 will give you the most aesthetically pleasing results.

Screenshot of Blender 3D Software Motion Blur Settings

Rolling Shutter

The other effect is the rolling shutter. The reason this happens in real life is that the camera captures the image in lines from top to bottom. This happens quickly, but not all at the same time, which is why you can see it with fast-moving objects like cars or planes. It also happens with fast camera swivels.

In Blender, you can activate it by selecting Top-Bottom for the rolling shutter, which is currently the only option. You can set the strength with the Rolling Shutter Duration, but here too the default settings will do their job most of the time.
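Both effects can also be enabled from Python. A minimal sketch for Cycles, assuming Blender 2.8x:

```python
import bpy

scene = bpy.context.scene

# Shutter: 0.5 = half a frame (1/48 s at 24 fps), like most real cameras
scene.render.use_motion_blur = True
scene.render.motion_blur_shutter = 0.5

# Rolling shutter (Cycles only): scanned top to bottom, subtle by default
scene.cycles.rolling_shutter_type = 'TOP'
scene.cycles.rolling_shutter_duration = 0.1
```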

Even though rolling shutter gives you more realism, you should ask yourself if you really want to use it, because it is less aesthetically appealing compared to the normal motion blur.