How Is the Price per GHz-hour Calculated, and Why?
First of all, this is not something new that we invented; most render farms have been using it for a long time to set the price of their service.
The reason I am explaining it is that it took me a while to wrap my head around it when I first heard of it, and there may be other people who don’t fully understand what they are actually paying for or how it is calculated. This article will help you understand how this works and why we use it to calculate the costs, instead of setting a price per machine like Google and other cloud providers do with their compute services.
Processor speed is measured in GHz. A 3.1 GHz processor has an internal clock that beats 3.1 billion times per second, each clock beat representing an opportunity for the processor to manipulate a number of bits equal to its capacity: a 64-bit processor can work on 64 bits at a time, which is what most modern CPUs do. This sounds more complicated than it is, but what you need to understand is that a CPU with more GHz will usually perform the tasks given to it faster.
Back in the day there were processors with a single core: a 3.1 GHz processor with 1 core had a total speed of, you guessed it, 3.1 GHz.
Nowadays we have CPUs with 6, 8, 10, 18 cores and more, and the price goes up with the number of cores and the cache size.
I will stick with the affordable 6-core 2.4 GHz CPU, which delivers a total of 14.4 GHz (6 x 2.4). Hyper-threading is not counted, as those are virtual cores that help the physical ones perform their tasks better; this matters for speed, but it’s not something we charge for on our render farm, so it stays out of the equation. Now let’s get back to our original question: how the price is calculated based on the GHz used per hour.
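The total-speed arithmetic above can be sketched in a few lines of Python (the function name and values are just for illustration):

```python
# Total effective clock speed: physical cores x per-core clock rate.
# Hyper-threading (virtual cores) is deliberately left out,
# since the farm does not charge for it.
def total_ghz(physical_cores: int, ghz_per_core: float) -> float:
    return physical_cores * ghz_per_core

single_cpu = total_ghz(6, 2.4)         # the 6-core 2.4 GHz CPU
dual_cpu_node = total_ghz(2 * 6, 2.4)  # a render node with 2 such CPUs
print(single_cpu, dual_cpu_node)
```

The 6-core CPU comes out at 14.4 GHz, and a node with two of them at 28.8 GHz, which is the figure used in the next section.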
A render node usually has 2 CPUs, delivering a total of 28.8 GHz. If we render a frame on that node which takes exactly 60 minutes and the CPU is used at 100% the whole time, we say that the frame used 28.8 GHz per hour. The thing is, the CPU is not used at 100% during the entire render, because there are other tasks that need to be performed which are not so CPU intensive, such as copying the scene and assets to the render node, loading the 3D application, dumping the frame from RAM to disk, etc.
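As a rough sketch of why that matters for billing, here is the same one-hour render expressed with a hypothetical average CPU utilization (the 85% figure is made up for illustration):

```python
node_ghz = 28.8             # dual 6-core 2.4 GHz CPUs
render_hours = 1.0          # the frame took 60 minutes wall-clock
avg_cpu_utilization = 0.85  # made-up figure: scene copying, app loading,
                            # and disk I/O keep the CPU below 100%

# Billing the full wall-clock hour would charge 28.8 GHz-hours;
# billing only the cycles actually spent computing charges less.
wall_clock_ghz_hours = node_ghz * render_hours
billed_ghz_hours = node_ghz * render_hours * avg_cpu_utilization
print(wall_clock_ghz_hours, billed_ghz_hours)
```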
For this reason, we charge for the number of clock cycles that were actually used (AKA GHz), that is, how many times per second the processor beat while computing a task. We end up with huge numbers, but the formula is quite simple to apply, and we have software that gives us the exact number of clock cycles a processor performs for a single frame, which we can then convert into how many GHz per hour that frame used.
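A minimal sketch of that conversion, assuming the counting software reports a raw cycle total (the numbers below are illustrative, not from our billing system):

```python
def cycles_to_ghz_hours(clock_cycles: float) -> float:
    # 1 GHz-hour = 1e9 cycles per second x 3600 seconds
    return clock_cycles / (1e9 * 3600)

# A frame that kept a 28.8 GHz node fully busy for one hour
# consumed 28.8e9 cycles/s x 3600 s worth of clock ticks.
cycles = 28.8e9 * 3600
print(cycles_to_ghz_hours(cycles))
```

The huge raw number (over 10^14 cycles here) collapses back into the familiar 28.8 GHz-hour figure.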
Knowing the number of clock cycles needed to render a frame allows us to use multiple hardware configurations while sticking to one price measurement unit. For example, the number of cycles used by a frame is the same on a 2.4 GHz 6-core CPU and a 2.4 GHz 4-core CPU. The frame will take longer to render on the 4-core one, but the price should be the same. (At least on paper: I’ve seen render services that weren’t calculating this number properly, billing a frame that took 14 minutes twice as much as one that took 7 minutes, when it was clearly just rendered on a slower machine, as the frames were next to each other without significant scene changes. I hope someone else spotted it and reported it, otherwise people are paying more for older hardware.)
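The fairness argument can be sketched the same way: the same cycle count yields the same price on both CPUs, and only the wall-clock time differs (the price per GHz-hour below is a made-up value):

```python
CYCLES_PER_GHZ_HOUR = 1e9 * 3600

def price(clock_cycles: float, price_per_ghz_hour: float) -> float:
    return clock_cycles / CYCLES_PER_GHZ_HOUR * price_per_ghz_hour

def render_hours(clock_cycles: float, node_ghz: float) -> float:
    return clock_cycles / (node_ghz * 1e9 * 3600)

frame_cycles = 14.4e9 * 3600  # illustrative: one hour on the 6-core CPU
rate = 0.05                   # hypothetical $ per GHz-hour

# Same price regardless of which CPU rendered it...
print(price(frame_cycles, rate))
# ...but different render times: 6 cores at 2.4 GHz vs 4 cores at 2.4 GHz.
print(render_hours(frame_cycles, 6 * 2.4), render_hours(frame_cycles, 4 * 2.4))
```

The 4-core run takes 1.5 hours instead of 1 hour, yet the cycle-based price is identical, which is the whole point of charging per GHz-hour.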
I hope this article made the matter a bit clearer for you. Use our online calculator to test a few values and get a better feel for how the time it takes to render a frame reflects in the final price. Even better, create a new account and spend the FREE $20 worth of credits rendering your own projects to see exactly how much you will spend using our service.