
CryEngine Neon Noir Benchmark, Submit Your Scores!



EHW Content Creator

Hi, my name is David and I'm a bencher...

 

I've been seriously into HWBOT and competitive benching for the past 10 years. When a new benchmark comes along, I am usually pretty quick to check it out. So today I found a new benchmark from Crytek, the "Neon Noir Ray Tracing Benchmark". Have you seen this or heard about it?

 

Who wants to run this benchmark and post some scores?

 

Check it out here: https://www.cryengine.com/marketplace/product/neon-noir

 

 


 

 

 

[GS]1zD_60MqxNp3xy-9Lho5N_uds_8-nUhyH4CV2NRwf0Kk[/GS]

 

LINK

 

[GS]1dgi4dk6eThDUOgj1G2XeJg5akZJjR2j2r8s7INh-Qgg[/GS]

 

LINK

 

[GS]1i-RrRVK_s5T22fWyj7YbsBFKNw4B2FIbUrebiwBznRA[/GS]

 

LINK

 

[GS]1tUObOTfbsgKO0BrdT4Bs5OXZAUfdd2oDEjMv3ygac7c[/GS]

 

LINK

Edited by ENTERPRISE

The benchmark is pretty cool. Something I'm having an issue with is AMD vs. Intel.

I'm running this bench on two 1080 Tis: one in my 8700K build and the other in my 3900X build. Same settings, drivers, and Windows updates.

Intel with a 1080 Ti gets about 100-120 FPS.

AMD with a 1080 Ti is only getting 20-23 FPS.

Not sure why. Can someone with an Intel and AMD setup try this?

  • 3 weeks later...

What settings are you guys running? We should establish a default so our scores are a bit more normalized... Crytek should show relevant specs in the results ;)

 

What a nice benchmark. The reflections look absolutely stunning, no difference that I can see between that and the "established" raytracing w/ RT cores.

 

Here is a 980 Ti @ 1420 MHz core / 3900 mem (on default and 1080p), 4000 mem (4K)... It performs shockingly well at 1080p; this tech would easily make a 1080 Ti viable in games for a few more years. Not sure how the 980 Ti would fare once more assets are thrown into the mix, but probably fairly well.

 

Default (I believe that was 1366x768), RT ULTRA, windowed, no loop


 

1080P, RT ULTRA... windowed, no loop


 

For giggles... 4K, RT Ultra, windowed, no loop... that ran about how you would expect a benchmark to run on contemporary hardware :p


Edited by 486

Interesting result; so a 980 Ti does better than my stock RX Vega 56. Crytek has to improve their DX11 performance, or at least release this ray tracing feature for DirectX 12 and Vulkan.


Thanks for sharing. I tell you what, it does show that with the correct software and low-level driver optimizations, older hardware can still very much hold its own. Granted, 4K may be a struggle, but at the end of the day the 980 Ti was not designed to be bashing out high frame rates at that resolution. I would love to see other companies adopt this method; it would be a win-win: those without RT cores can still get some goodness, and those with RT cores, one would assume, may even get better performance (though that is a guess). That being said, not sure how Nvidia would feel about this lol.


Hmm, maybe the default setting after installation was based on hardware; mine definitely started out at 1366x768. No matter though, I just think they should show your settings + hardware config and clock speeds in the results. And yeah, look at that, I couldn't tell a difference between Very High and Ultra.

 

 

 


 

Would have to see more of what comes from this demo. If you look into some of the analysis done on it, it's very heavily optimized: e.g. very few moving objects; only nearby objects in the reflections are ray traced, while distant objects are rendered using other methods; only mirror-like objects get ray-traced reflections; low-poly object reflections; low-res reflections.
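The kind of distance/roughness culling described above can be sketched conceptually; this is just an illustration of the idea, with invented names and thresholds, not CryEngine's actual code:

```python
import math
from types import SimpleNamespace

def pick_reflection_method(obj, camera_pos, max_rt_distance=50.0, mirror_threshold=0.1):
    """Pick the cheapest acceptable reflection technique per object:
    ray trace only nearby, mirror-like surfaces; fall back elsewhere."""
    distance = math.dist(obj.pos, camera_pos)
    if distance > max_rt_distance:
        return "cubemap"        # distant objects: pre-baked fallback
    if obj.roughness > mirror_threshold:
        return "screen_space"   # rough surfaces skip ray tracing entirely
    return "ray_traced"         # nearby, mirror-like surfaces only

# Same mirror-like material, near vs. far from the camera
mirror_near = SimpleNamespace(pos=(0.0, 0.0, 10.0), roughness=0.02)
mirror_far = SimpleNamespace(pos=(0.0, 0.0, 200.0), roughness=0.02)
print(pick_reflection_method(mirror_near, (0.0, 0.0, 0.0)))  # ray_traced
print(pick_reflection_method(mirror_far, (0.0, 0.0, 0.0)))   # cubemap
```

This is why the demo stays cheap: most of the frame falls into the fallback branches, and only a small, carefully chosen subset of surfaces ever pays the ray tracing cost.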

 

That said... that's all being done using traditional shaders and it looks incredibly fantastic, but how expensive is it on the shaders? E.g. how would this perform in an actual game like Battlefield V, relative to the RTX setting on low or medium using an RTX card? As far as I can tell the demo and engine are proprietary, and so are RT cores, minus some very basic understanding I have of their function and the algorithm they were designed to optimize... No way to make a guess.

 

I see this going the way all of Nvidia's proprietary hardware implementations go: Nvidia will have the decidedly best and most advanced (and premium-priced) solution for a number of years, until either traditional shaders get modified or AMD (or Intel?) makes an open-source alternative. Nvidia will also politely convince AAA devs to implement hardware ray tracing so we all wish we had their card (this is a bit better than HairWorks :p)

 

 


Granted, the benchmark isn't game-accurate; in a real game there would be other things going on in the scene, whereas here there is only one true moving object (the drone). So it can be argued that the performance is nowhere near real world.

 

That being said, the scene work is very well optimized and would obviously lend itself well to a real game scenario. So while the benchmark performance is to be taken with a grain of salt, it's still a step in the right direction, even if it's using traditional methods and not RT cores.

 

Ultimately I think AMD will join the hardware-accelerated game when it comes to ray tracing, and the competition will simply carry on evolving that technology. I think standard implementation of ray tracing through standard shaders, while nice, will ultimately be short-lived. I don't think any open-source method will really get adopted; I think it's only going to move forward through RT core implementation / AMD's hardware acceleration method.

 

I guess time will tell!


"RDNA2 Navi 23 GPU to Take The Competition to NVIDIA in the High-End Next Year

Based on everything we've been hearing so far Navi 23 is going to be a large GPU that competes in the high-end segment in 2020. It is expected to bring full hardware accelerated ray tracing as well as a plethora of new architectural improvements."

I think this kinda shows AMD isn't sitting on their tails hoping RTX is just a fad.... ;)


 

That is the one; I was sure I had heard of a new GPU coming from AMD bringing ray tracing into action. It will be interesting to see how they implement it and how it stacks up against Nvidia. Logic and track record dictate to me that AMD's first gen will be, dare I say it, lackluster in the RT department compared to Nvidia's offerings. Not meaning to put a downer on it, but I find that AMD GPUs are still always that couple of steps behind. Now if they could bring some of that Ryzen hurt they have been giving Intel, in the form of a GPU, and give it to Nvidia... that would be nice. Furthermore, it would hopefully bring the market back into pricing balance.


I don't think it would be a big deal if AMD's entry into hardware ray tracing is lackluster; one could say the exact same about RTX 2xxx as it is anyway.

 

AMD just needs a big gain in straight up raster.

 

Unfortunately, it's become blindingly apparent that cost per transistor is on the rise with every new node that comes along. Everyone is hoping big Navi will come in and drop market prices back down to how they were in the Maxwell days, but I don't see it happening on the high end right now. Power consumption is also concerning when Navi 10 (250 mm^2) is pumping out ~225 watts.

 

Exciting times for ray tracing but they won't be budget times :p


 

I think the ray tracing on the current RTX cards is actually rather good, assuming the game has been appropriately developed. The first version of Battlefield V that supported RTX was awful because, in my humble opinion, it was rushed; they were virtually ray tracing everything in the scene, which is a little unrealistic. I am not saying there is no room for improvement on a hardware level; of course there is, and the next generations of GPUs will yield better performance. What I will say is that I have been relatively impressed with the later ray tracing implementations in games; the earlier ones were very hit and miss. I do not think AMD will bring the hurt this time round, and they have a lot to catch up on: not only ray tracing but, as you say, raster performance and of course VRAM count. I understand they went with HBM/HBM2, but in all honesty, what is the point of all that bandwidth if the game actually needs raw space, not speed? Strictly for gaming use, I am still waiting to see a real-world advantage to HBM... that, or I am just blind lol.


 

Supposedly the new AMD Navi 23 will be able to compete with or even beat the 2080/2080 Ti, BUT that's against either THIS generation (the Supers) or even the first-generation cards, so it leaves room for Nvidia to release a new generation that will be wiping the floor with it (sadly). It WOULD be nice to see team green get some real competition, to make them treat consumers right again and start sweating a bit. :)


 

Well, hopefully they can start upping their game. Unfortunately, the fact that they can match or beat the 2080/2080 Ti (IF they can) will not be a selling point they want to tout; there is nothing good about pointing out that your new tech can only just match tech that's a year-plus old lol. I just hope their marketing does not go in that direction... I have seen some odd things.

  • 1 month later...
  • 2 weeks later...
I ran a quick one using a Threadripper and a stock card, so I am guessing the score sucks... We need more runs to compare to :D

 


 

Hang on lol, how is your score like double mine? Out of interest, did you limit your cores in Ryzen Master when running? @mllrkllr88

Edited by ENTERPRISE

 


 

A few things here. Firstly, this bench is very inconsistent in terms of run-to-run score; run it five times and you will see a big variance. I saw a big gain from disabling SMT, so that's something you can try.
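Given the run-to-run variance, averaging a batch of runs before submitting is a quick sanity check. Something along these lines would do it (a generic sketch; the run numbers below are made up, not real Neon Noir scores):

```python
import statistics

def summarize_runs(scores):
    """Average a set of benchmark runs and report the spread,
    so one noisy run doesn't skew a submission."""
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    return mean, stdev

# Five hypothetical runs at identical settings
runs = [7850, 8120, 7690, 8310, 7940]
avg, spread = summarize_runs(runs)
print(f"average: {avg:.0f}, run-to-run stdev: {spread:.0f}")
```

If the stdev comes out as a noticeable fraction of the average, the bench is too noisy for single-run comparisons and the averaged score is the fairer number to post.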

 



I did not think of SMT; I just limited it to 8 cores to reduce the bottleneck with respect to memory access. I will disable SMT and re-bench five times to get an average. Cheers dude.


@mllrkllr88

 

Oddly, nothing seems to alter the score as far as CPU/SMT config goes, so I'm confident it must be a resolution/GPU thing. I'm running an ultrawide at 3440x1440. Though SLI is enabled, the benchmark only uses one GPU as it is only designed for one; I may try disabling SLI to see if that makes a difference.

