

Stuck on CPU Ray Tracing in perpetuity apparently.


The Apple M1, available in the MacBook Air, the 13" MacBook Pro, and the Mac mini, has been the focus of a ton of benchmarking write-ups and blog posts, and the performance Apple has achieved with the chip, especially performance per watt, is very impressive.

I get from the other replies here that this is apparently an Apple issue. But regardless of whose shoulders it falls on, as a consumer using CA on a Mac, it's hard to come to any conclusion other than that Mac users are stuck with a vastly inferior product compared to the Windows version of CA, at the same application and SA renewal price. How is that just a "shoulder-shrug, okay, hopefully this all works out in the future" justification on CA's part? This is all just not okay.

The Real Time Ray Trace omission is not really a big deal in my opinion, but the now VAST difference between overall CPU and GPU ray trace times is infuriating. I downloaded the X13 beta just to try the faster ray tracing on my 16" MacBook Pro, only to have pass 1 take 9 minutes, which seems even slower than X12 (pass 3 just finished at 28 minutes).

The webinar about X13 features and ray tracing was SUPER misleading about this: it seemed to indicate that using Metal would have the same impact on GPU rendering as the Windows-based GPU rendering engine. The only Mac issue disclosed in the webinar was incompatibility with the M1, which CA is working on, and that much makes sense.

It's hard to justify buying a $4,000 PC just to ray trace. I have a $4,400 Mac Pro and a $3,900 MacBook Pro, and I'm extremely disappointed that X13 Real Time Ray Tracing is not supported on the Mac.
