Compute efficiency isn't measured by your capacitance and resistivity. It's measured by your electron loss during switching operations. Lower resistivity means that with each switching operation you're letting more electrons through per switch, which is *bad* for compute efficiency, heat, and power consumption. You want to pass fewer electrons per switch, if possible. That's what drives performance. Transistor efficiency is *all* about trying to do more work with fewer electrons.
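To make the "fewer electrons per switch" idea concrete, here's a minimal sketch using the textbook relations Q = C * V and E ≈ C * V². The capacitance and voltage values are illustrative assumptions, not measurements from any real process:

```python
# Minimal sketch of "electrons per switch," assuming illustrative
# (made-up) values for gate capacitance and supply voltage.
ELEMENTARY_CHARGE = 1.602e-19  # coulombs per electron

def electrons_per_switch(capacitance_f: float, voltage_v: float) -> float:
    """Electrons moved onto the gate per switch, from Q = C * V."""
    return capacitance_f * voltage_v / ELEMENTARY_CHARGE

def energy_per_switch_j(capacitance_f: float, voltage_v: float) -> float:
    """Dynamic energy per switching cycle, approximated as E = C * V**2."""
    return capacitance_f * voltage_v ** 2

# Hypothetical numbers: ~50 aF effective gate capacitance, 0.75 V rail.
C, V = 50e-18, 0.75
print(f"electrons per switch: {electrons_per_switch(C, V):.0f}")   # ~234
print(f"energy per switch:    {energy_per_switch_j(C, V):.2e} J")  # ~2.8e-17 J
```

The point the sketch illustrates: every electron you can avoid moving per switch shows up directly as less energy burned per operation.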
"So long as your processor doesn't go over 100 degrees" is a huge caveat. Even everyday consumer processors can go beyond that temperature quite easily under heavy load. Remember, what matters for transistor efficiency is the *local* temperature, the temperature in a very small area. Your whole device doesn't need to be at 100 degrees C for your processor to be. When your device heats up to 30 or even 40 degrees C, it's essentially acting as a heat sink for the thermal load your processor generates. If your whole device, representing a much larger area, can get that hot just from absorbing the heat your processor puts out, that tells you a lot about how hot the processor itself, the source of that heat, is actually getting.
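To see why a merely warm chassis implies a much hotter die, here's a toy steady-state model using series thermal resistances, T_j = T_amb + P * (θ_jc + θ_ca). Every number below (power, ambient, and both resistances) is an assumption for illustration, not a spec for any real chip:

```python
# Toy steady-state thermal model: junction temperature via series
# thermal resistances. All values here are assumed, not measured.
def junction_temp_c(ambient_c: float, power_w: float,
                    theta_jc_c_per_w: float, theta_ca_c_per_w: float) -> float:
    """Die (junction) temperature in degC under sustained power draw."""
    return ambient_c + power_w * (theta_jc_c_per_w + theta_ca_c_per_w)

def case_temp_c(ambient_c: float, power_w: float, theta_ca_c_per_w: float) -> float:
    """Chassis temperature: only the case-to-ambient resistance applies."""
    return ambient_c + power_w * theta_ca_c_per_w

# Assumed: 25 degC room, 90 W sustained load, 0.65 degC/W die-to-case,
# 0.17 degC/W case-to-ambient.
print(f"case:     {case_temp_c(25.0, 90.0, 0.17):.0f} degC")            # ~40 degC
print(f"junction: {junction_temp_c(25.0, 90.0, 0.65, 0.17):.0f} degC")  # ~99 degC
```

Under these assumed numbers, a chassis sitting at 40 degrees C coexists with a die pushing 100, which is exactly the local-versus-global distinction above.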
You can try to finagle around the performance difference of each node shrink as much as you want, but the figures are what they are. When a node shrink delivers 15-20% more compute or 15-30% less power consumption, those gains are real, and if you're two nodes behind, those differences *compound*. Going from 60 microns down to 6 nanometers would represent a performance difference of many multiples, and that's certainly not the level of difference we're talking about here, but a 30-50% difference in performance, compounded across two or three nodes of gains, is nothing to sneeze at either.
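Since the compounding is just arithmetic, here's a short sketch that applies the per-node figures quoted above (15-20% compute, 15-30% power) across two and three shrinks. The per-node numbers come from the paragraph itself; nothing here is vendor data:

```python
# Quick arithmetic on how per-node gains compound across shrinks,
# using only the ballpark figures quoted above.
def compounded_speedup(per_node_gain: float, nodes_behind: int) -> float:
    """Total compute uplift: (1 + g)**n - 1."""
    return (1.0 + per_node_gain) ** nodes_behind - 1.0

def compounded_power_saving(per_node_saving: float, nodes_behind: int) -> float:
    """Total power reduction: 1 - (1 - s)**n."""
    return 1.0 - (1.0 - per_node_saving) ** nodes_behind

for n in (2, 3):
    print(f"{n} nodes behind: "
          f"+{compounded_speedup(0.15, n):.0%} to +{compounded_speedup(0.20, n):.0%} compute, "
          f"-{compounded_power_saving(0.15, n):.0%} to -{compounded_power_saving(0.30, n):.0%} power")
# 2 nodes: +32% to +44% compute, -28% to -51% power
# 3 nodes: +52% to +73% compute, -39% to -66% power
```

Two nodes of compounding already lands in the 30-50% range cited above, and a third node pushes it well past that.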