Does VorpX induce a CPU bottleneck like 3D Vision does?

  • #195117
    vulcan78
    Participant

    Long-time 3D Vision user here; I just got into VR (Valve Index). With 3D Vision there is a problem where the effect seems to take an entire core/thread for itself, leaving only three remaining cores to handle the game. This is known as the “3 Core Bug”. My question is: does VorpX also suffer from this problem?

    Also, is there any reason DX12 titles do not work with VorpX?

    Someone over in the 3D Vision community (TimFX7) actually managed to get Shadow of the Tomb Raider working in 3D Vision under DX12. Theoretically 3D Vision can work with DX12; it’s just a tremendous amount of work, and none of it is being done by Nvidia, seeing as they discontinued 3D Vision support as of driver 425.31.

    https://www.mtbs3d.com/phpBB/viewtopic.php?f=181&t=23410

    I am completely new to VorpX (I haven’t purchased yet, and whether or not the 3 core bug is present will heavily influence my purchasing decision). I understand that the 3D is accomplished via SBS rather than 3D Vision’s method, so I am assuming there is no “3 Core Bug” present?

    Thanks for any help with this.

    #195121
    Ralf
    Keymaster

    With Geometry 3D, vorpX, just like 3D-Vision, has to submit twice as many draw calls to the GPU, one for each eye, which essentially doubles the workload for the GPU and also implies substantially more work on the CPU side. That’s not a bug, though, neither in 3D-Vision nor in vorpX. I’m not sure why this frequently gets called the “3 core bug” among members of the 3D-Vision community. One of those odd urban legends where you seriously wonder how they could ever have come into existence.

    The actual fact of the matter is that submitting work to the GPU in most current game engines is still largely a serial task. It doesn’t automatically get parallelized just because a CPU has more cores. If one core gets maxed out by the main render thread, the whole game is limited at that point, no matter how many cores are left doing nothing. That’s not a bug; that’s just how things work. The only ones who can change that are the game devs, by making better use of multithreading in their rendering code (as far as possible; there are limits to that).
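
    For illustration, here is a minimal toy model (not vorpX or 3D-Vision code; every number in it is made up) of why a serial submission thread caps the frame rate no matter how many cores the CPU has:

    ```python
    # Toy model, not vorpX/3D-Vision code: draw-call submission runs on ONE
    # thread, so the achievable FPS depends only on the submission cost per
    # frame, not on core count. Costs and call counts are hypothetical.

    DRAW_CALL_COST_MS = 0.004   # assumed CPU cost per draw call, in ms
    DRAW_CALLS_MONO = 2000      # assumed draw calls per frame in mono

    def cpu_limited_fps(draw_calls, cores):
        # 'cores' is deliberately unused: a serial render thread cannot be
        # spread across cores just because more of them exist.
        frame_time_ms = draw_calls * DRAW_CALL_COST_MS
        return 1000.0 / frame_time_ms

    for cores in (4, 6, 8, 16):
        mono = cpu_limited_fps(DRAW_CALLS_MONO, cores)
        stereo = cpu_limited_fps(DRAW_CALLS_MONO * 2, cores)  # one pass per eye
        print(f"{cores:2d} cores: mono {mono:.0f} FPS, stereo {stereo:.0f} FPS")
    ```

    The output is identical for every core count: stereo halves the frame rate because the one submission thread has twice the work, not because any cores get disabled.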

    #195126
    vulcan78
    Participant

    No, actually 3D Vision limits the number of cores used to three; this has nothing to do with draw call count or GPU workload.

    https://www.nvidia.com/en-us/geforce/forums/3d-vision/41/239588/3d-vision-cpu-bottelneck-gathering-information-thr/

    This is why I’m asking here: does VorpX do something similar? Will I be limited to 3 physical cores with it?

    #195129
    Ralf
    Keymaster

    That explains a lot. Guess I don’t have to wonder anymore how that myth arose. Considering the author of that post, all I’ll say is that half-baked, superficial knowledge combined with strong opinions and the drive to share one’s ‘wisdom’ with the world is probably the most dangerous thing developed by mankind, besides maybe the hydrogen bomb. Luckily, in this case the only harm done was causing confusion among a stereo 3D community. ;)

    I have no reason to defend nVidia, but imagine the following simplified scenario: a CPU with two cores and a game using two threads, one for rendering, one for physics. In mono both threads each fully utilize ‘their’ core, overall CPU usage 100%. Now, again for the sake of simplicity, let’s say stereo rendering doubles the CPU workload of the render thread, causing the framerate to drop to half purely due to the higher CPU workload. The first thread (rendering) would still fully utilize its core, but the second thread (physics) now would only have to utilize 50% of its core, since only half as many frames need physics calculated compared to before. Result: instead of 100% CPU usage in mono you get 75% CPU usage in stereo.

    No mysterious ‘3 core bug’ anywhere, everything working as expected. The ‘culprit’ is the added stereo workload on the render thread, as simple as that. The numbers will differ in a real world example and vary from game to game, but I hope you get the idea.
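
    To put that example into numbers, here is the same scenario as plain arithmetic (the figures are the hypothetical ones from the example above, not measurements):

    ```python
    # Ralf's simplified two-core scenario as arithmetic. All values hypothetical.
    mono_fps = 100.0
    stereo_fps = mono_fps / 2.0     # assumed: stereo doubles render-thread work

    render_usage = 1.0              # render thread saturates its core either way
    physics_usage_mono = 1.0        # physics keeps pace with 100 FPS at full load
    physics_usage_stereo = physics_usage_mono * (stereo_fps / mono_fps)  # 0.5

    print(f"mono:   {(render_usage + physics_usage_mono) / 2:.0%} total CPU")
    print(f"stereo: {(render_usage + physics_usage_stereo) / 2:.0%} total CPU")
    # -> mono: 100%, stereo: 75%, with no core ever 'disabled' by the driver
    ```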

    #195134
    vulcan78
    Participant

    If this were a case of the thread responsible for rendering frames being saturated and needing to offload some of its workload onto another thread, then we would see this problem running the game in 2D at 120 FPS, but that isn’t the case with 3D Vision. For example, I can run GTA 5, Prey, or Rise of the Tomb Raider in a CPU-intensive area (DX11) at 120 FPS in 2D, everything maxed, at 2560×1440 with a 2080 Ti and an 8700K @ 5.1 GHz, no problem. But 3D performance is about a third of that, an abysmal 40 FPS (the equivalent of 80 FPS of 2D rendering).

    Even DX11 has the ability to use more than 4 cores / 8 threads for rendering; Crysis 2/3 and Watch Dogs 2, for example, show an even distribution of workload across CPUs of greater core count. 3D Vision actually limits the game to running on only 3 physical cores for some odd reason, so an 8700K would yield no performance advantage over a 7700K, and even the 7700K would be down a core when processing any game.

    I get and appreciate your logic that stereo rendering would actually decrease CPU load (because physics being done on a single thread would have a reduced workload), but that’s not what I’m describing: 3D Vision limits the number of available physical cores to three for some odd reason, the technical nature of which escapes me right now. There is around a 50% drop in performance due to the 3 core bug. I can run a game at 120 FPS with full load on the GPU in 2D, but because the bug limits the number of physical cores to three in 3D, those three cores may be incapable of rendering the game at 120 FPS (even though the effect renders at 60 FPS, there are still 120 frames being created, one for each eye). Usually a CPU-demanding title that I can run at 120 FPS average with full or near-full GPU usage will only run at 40 FPS in 3D.

    #195135
    Ralf
    Keymaster

    I don’t have anything to add. If the boiled down example doesn’t make it understandable for you, I don’t know what could. And honestly I have better things to do than arguing with you. Like I said, I obviously have no reason to defend nVidia, there just is no “three core bug” in 3D-Vision. May sound counter-intuitive to you, but even with the GPU left out of the equation reduced overall CPU usage simply is the logical consequence of a reduced framerate in a multithreaded game running on multiple cores.

    #195148
    vulcan78
    Participant

    That’s not correct; you refuse to listen. This has nothing to do with a thread being completely used up by the effect: 3D Vision actually limits the number of cores available to three.

    I’ve been around 3D Vision since 2013 and been into PC gaming since 2010; here’s my computer:

    I’m not a noob; I know what I’m talking about. You need to come down from your technical high horse. You have this “I know what I’m talking about and you’re spouting rubbish” attitude and refuse to listen.

    3D Vision limits the number of physical cores available to three. The “3 Core Bug” problem is real and well known.

    https://www.nvidia.com/en-us/geforce/forums/3d-vision/41/200937/3d-vision-cpu-core-bottleneck/

    “Hi everyone!

    Been getting quite frustrated with the poor performance I’m getting in 3D and I think Ragedemon has worked it out:

    https://forums.geforce.com/default/topic/825678/3d-vision/gta-v-problems-amp-solutions-list-please-keep-gta-discussion-here-/19/

    When using 3D Vision, games only use up to 3 cores. I tried it myself with GTA 5, scaling up the number of cores in its core affinity. Once you’ve got to three, that’s your lot, leaving GTA running at between 35-45 FPS and using only about 40% of the GPU on my two GTX 980s in SLI.

    I then disabled the 3D Vision driver and of course the FPS shot up to over 120, and I get GPU usage of close to 100%. I disabled cores and discovered that with 3 cores enabled I get exactly the same performance as with 3D Vision enabled (without 3D, of course): the same low GPU usage and low FPS. Adding the cores back increases usage and brings it back to over 120 FPS.

    Of course this doesn’t matter for a lot of old games, but I think we are going to find this a big problem in the future. Something that hardware upgrades can’t even help with!

    Anyone noticed this in any other games? Ragedemon says that it happens in AC4 Black Flag and CoD: Advanced Warfare too.

    Anyone found a workaround?

    Is it too optimistic to hope this could be fixed in future drivers?”
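
    For anyone who wants to repeat that affinity experiment themselves, here is a small sketch using the psutil library (the process name is just an example; adjust it to whatever game you’re testing):

    ```python
    # Sketch of the experiment quoted above: pin a running game to a chosen
    # set of cores and compare FPS / GPU usage. Requires psutil
    # (pip install psutil); "GTA5.exe" is only an example process name.
    import psutil

    def pin_process(process_name, cores):
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == process_name:
                proc.cpu_affinity(cores)   # restrict the process to these cores
                print(f"{process_name} pinned to cores {cores}")
                return
        print(f"{process_name} not found")

    pin_process("GTA5.exe", [0, 1, 2])         # three cores, as in the quote
    # pin_process("GTA5.exe", list(range(8)))  # then all cores, and compare FPS
    ```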

    #195155
    Ralf
    Keymaster

    Listen, I tried to answer your question by providing you an example that makes it as easy as possible to understand what actually happens technically in a multithreaded game when you raise the load on the main thread. That apparently wasn’t what you wanted to hear, so I apologize for that.

    People believe all sorts of imaginary crap, half the internet is full of it. If you want to believe in a ‘3 core bug’ in nVidia’s 3D-Vision driver, I have zero problem with that. I mean why trust a reasonable explanation when you can also believe in some conspiracy theory that involves nVidia willingly hampering 3D-Vision performance by not fixing some odd ‘3 core bug’ for several years.

    I had a gut feeling that it might be better to just ignore your post right away; I should have listened to it.

    #195175
    vulcan78
    Participant

    No, you did not listen and are still not listening, 3D VISION DOES NOT SATURATE A THREAD NEEDING TO OFFLOAD WORKLOAD ONTO ANOTHER THREAD, 3D VISION LIMITS THE NUMBER OF PHYSICAL CORES AVAILABLE TO THREE, HENCE THE NAME OF THE PROBLEM “3 CORE BUG”.

    If it were simply a case of saturating a thread/core, then the problem would be resolved by using a CPU with a greater core count, e.g. an 8700K or 9900K. There are many DX11 titles with decent multi-threaded support, e.g. Watch Dogs 2 and Crysis 2/3, just to name a few. If you try to play THOSE titles in 3D Vision, THERE ARE ONLY THREE CORES IN USE.

    #195177
    Ralf
    Keymaster

    Yes, of course. nVidia’s engineers are fools or maybe they wanted to personally annoy you and your friends with this ‘3 core bug’ that they never fixed for years, who knows. Apologies again for doubting that might be the case.

    Seriously, one last try:

    If a game is CPU limited, one thread, typically the main render thread, always saturates a core; that’s why the game is CPU limited in the first place. Now when you further raise the load on that thread, like 3D-Vision does, your framerate drops, and secondary threads have to do less work than before since fewer frames are rendered. Hence you see overall CPU usage going down. That’s what my example was about. If a game happens to be GPU limited, the same applies, just then not even the main thread fully occupies one core.

    The opposite of your conclusion is true: adding more cores doesn’t change anything beyond the point where the CPU isn’t fully used anymore (leaving out unlikely edge cases here). If, for example, a hypothetical game is programmed to have three threads, there will be no difference between running it on a 4-core CPU and a 10-core CPU. This is perfectly normal behavior.

    What might make this a bit confusing is that the OS scheduler can shift a program’s threads between physical cores at will, which can skew the impression you get from the task manager. Maybe that tricked you, or whoever observed this ‘bug’ first, into thinking no thread is maxed out although in fact one is. Just a guess.
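
    If anyone wants to check that for themselves: per-thread CPU time is what gives it away, not the per-core graphs. A rough sketch with psutil (the PID is whatever game process you want to inspect):

    ```python
    # Sketch: spot a saturated thread even when the scheduler bounces it
    # across cores (which is what makes the per-core Task Manager view
    # misleading). Requires psutil; pass the PID of the game process.
    import time
    import psutil

    def busiest_thread_share(pid, interval=5.0):
        proc = psutil.Process(pid)
        before = {t.id: t.user_time + t.system_time for t in proc.threads()}
        time.sleep(interval)
        after = {t.id: t.user_time + t.system_time for t in proc.threads()}
        # CPU seconds each thread burned during the interval
        burned = {tid: after[tid] - before.get(tid, 0.0) for tid in after}
        tid, seconds = max(burned.items(), key=lambda kv: kv[1])
        # a value near 100% means one thread kept a whole core busy throughout
        print(f"thread {tid} used {seconds / interval:.0%} of one core")
    ```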

    Whether you want to believe me or not is up to you obviously, and I honestly don’t care. I can just try to explain from the perspective of someone who deals with this stuff regularly. Maybe nVidia should have done that before a considerable amount of people convinced themselves that there is some weird ‘3 core bug’ in 3D-Vision. Now it’s obviously too late.

    #195215
    Minabe
    Participant

    Just read your other post, and I guess this is why you wrote it. Don’t beat yourself up, Ralf. The easy answer was to simply state that 3D-Vision isn’t even supported by nVidia anymore and no new machine uses it any longer, since the “new” drivers don’t even come with it. That makes anything related to it completely obsolete and simply not applicable anymore, including the “3 core bug”, which still happens even without 3D-Vision because it simply isn’t a bug.

    #195519
    RJK_
    Participant

    I really wonder who was on a high horse here; no wonder Ralf got pissed. The answers above probably took hours to type.

    #195532
    DADDYPANK
    Participant

    Wow, just read the whole thread. This ‘3 core bug’ is obviously a name given to something not fully understood that seemed to fit observations. However, when someone else comes along (who is clearly and vastly more experienced in such matters) and explains what’s really going on, you need to adjust your understanding and move on.

    That’s how we learn.

    #213911
    v301
    Participant

    Thanks for your reply, Ralf.
    Now many people are probably wondering how important single-core versus multi-core CPU performance is.

    Should I use a 6-core (32 MB L3) or a 12-core (2×32 MB L3) CPU?
