Pete Dowson

DXGI_ERROR_DEVICE_HUNG error: Is there a solution?


4 minutes ago, Pete Dowson said:

From that 'analysis' it must surely be down to nVidia, messing up their drivers, not P3D? I don't really see how P3D could cause a GPU to report timeouts, hangs, or anything driver/hardware related. It's at the wrong level in the Windows architecture. The most it could do is put a severe enough load on it to show up failures elsewhere.

Continually looking for improved performance, I have updated drivers regularly (though a bit behind at present -- I have 416.22 downloaded but not yet installed).  But I only install the proper WHQL ones.

Pete

 

 

Try the 397.64 drivers. I bet you a discounted Virgin mates-rates ticket in Upper... you won't get the issue, I'm that confident.



8 minutes ago, tooting said:

Please explain how this can't be an issue with P3D then.

Do some basic internet searches and you'll see a very common solution ... but don't let that stop you from jumping to conclusions.  The more the app/game/sim makes use of VRAM, the higher the probability it will expose hardware issues ... P3D can use A LOT of VRAM and can stress a GPU (SSAA will do that); TEXTURE_SIZE_EXP=9 or 10 will use anywhere from 4GB to 10+GB of VRAM.

Cheers, Rob.


18 minutes ago, tooting said:

I don't get this error with any other software I use, only P3D.

Ditto.

3 minutes ago, Rob Ainscough said:

Do some basic internet searches and you'll see a very common solution ... but don't let that stop you from jumping to conclusions. [...]

Then why does using EXP 9 or 10 work fine with ZERO issues on drivers before 397.64, but not on drivers after that?

4 hours ago, cj-ibbotson said:

Sadly my own card is just outside the warranty (if the warranty was just 12 months). I've had crashes since June or July but was convinced my card wasn't faulty, as so many experience these crashes. I also did not want to be without a GPU for a month waiting for tests etc.

I might review running a log like you did and post results. The only thing is I can't sit in front of the PC for hours whilst it's testing, so I'm unsure at which point the log is applicable, as I wouldn't know the exact time of the crash unless it happens whilst I'm present. Any tips?

Chris

Hi Chris

I could be wrong, but I thought you swapped cards a few weeks ago with your son and demonstrated the issue was your card; perhaps I have you confused with someone else.

Regardless, I assume you tried the 397.64 drivers stated by someone else above as "the solution" and it didn't solve the problem. BTW, it didn't solve any of my friends' problems with this issue.

As I stated previously in this forum, only a new card fixed the issue for a couple of my friends. IF there are emerging card issues, that would explain why this issue DOES show up across several games and sims, not just P3D, as I have also seen in my research.

I look forward to Pete's conclusion when he reactivates the card he disabled, to see if his issue was indeed caused by his card as well.

Joe

13 minutes ago, tooting said:

Try the 397.64 drivers. I bet you a discounted Virgin mates-rates ticket in Upper... you won't get the issue, I'm that confident.

Then you should be aware that a specific driver version was also suggested in the many threads on the internet about other games/sims/apps, and in many cases that did NOT solve the problem.  But either way, you've just admitted it's an nVidia problem by suggesting it's driver specific ... sooooo, you're back at square one.  Suggest you bring this up with nVidia.

In the meantime, for those seeking solutions that are outside of RMA:

1.  Try underclocking the GPU/memory
2.  Don't use TEXTURE_SIZE_EXP=9 or 10 (this also means unchecking high-res terrain in P3D); this will reduce VRAM usage (see the config sketch after this list)
3.  If SLI, disable it and use a single GPU; if the problem still persists, remove a GPU and see if it only happens with a specific GPU
4.  Buy another GPU, keep the receipt, and test with it; if the problem still remains you can return the GPU for your money back
5.  Contact nVidia support

OR, just keep blaming LM and get nowhere
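For reference, the entry in question lives in Prepar3D.cfg; the [TERRAIN] section and the default value of 8 shown here are taken from common tweak guides, so treat them as assumptions and verify against your own file. Leaving it at the default keeps VRAM usage down; 9 or 10 trades several extra GB of VRAM for sharper terrain textures, per the numbers above:

[TERRAIN]
TEXTURE_SIZE_EXP=8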

Cheers, Rob.


My money is still on a conflict between P3D and drivers after 397.64.

2 minutes ago, tooting said:

My money is still on a conflict between P3D and drivers after 397.64.

I've had the issue since long before these drivers were released.

19 minutes ago, tooting said:

Try the 397.64 drivers. I bet you a discounted Virgin mates-rates ticket in Upper... you won't get the issue, I'm that confident.

I don't think I can use such old drivers on this hardware. And I no longer have the issue; I've found the problem and fixed it.

Pete

Just now, Pete Dowson said:

I don't think I can use such old drivers on this hardware. And I no longer have the issue; I've found the problem and fixed it.

Pete

To save me trawling up the page, how did you fix it please?


A little update: I tried the 397.64 drivers after DDU in Safe Mode, but now I am getting 100% GPU usage and a stuttering slide show, even at very low settings and in the default Piper Cub. I have never hit 100% GPU usage before with my high settings. I also deleted the shaders and the Prepar3D.cfg; no change. I haven't done anything to the clocks yet, and I don't want to touch those until I can at least get the sim running smoothly again (frame-rate wise), but it seems I am just making things worse. Before, I was never below 25fps in hard areas like LAX/SFO.

This is frustrating because I loved what I saw with 4.4, and the only flight I did (LAS-SLC) ended with a HUNG error, which seems to happen in the SLC valley; it didn't happen for me anywhere else when jumping to different airports. And now the stuttering slideshow has me baffled. A lame question, but how do I know if I have a factory overclocked card? I will be doing a complete uninstall and fresh install after Christmas as I'm getting a new (larger) SSD for the sim.
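On the factory-overclock question: one quick check, assuming an NVIDIA card with a reasonably recent driver (which ships the nvidia-smi tool), is to compare the card's reported maximum clocks against NVIDIA's reference specification for your model on nvidia.com:

nvidia-smi -q -d CLOCK

Look at the "Max Clocks" section of the output; if the graphics clock there is higher than the reference boost clock listed for your model, the card is factory overclocked. GPU-Z will show the same clock information in a GUI.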


4 minutes ago, tooting said:

To save me trawling up the page, how did you fix it please?

First I tested with SLI disabled. Result: okay!
Then I eliminated one of the three GPUs in my 3-way SLI, setting it to 2-way.  Also okay!

One more test to do (tomorrow): include the eliminated GPU and exclude one of the others, again in a 2-way SLI.

If that works as well, then I know it's my 3-way configuration which was responsible (though why is another matter). If that doesn't work, it points to the originally excluded card being faulty.

Pete

 

2 hours ago, tooting said:

Then why does using EXP 9 or 10 work fine with ZERO issues on drivers before 397.64, but not on drivers after that?

The theory you have to discount is that something in the older drivers isn't working the card as hard as the newer drivers, either in memory use or heat. Since the card is not working as hard, it doesn't corrupt the memory and there isn't a DXGI hung error.

I got the dreaded DXGI_ERROR_DEVICE_HUNG error on a brand new 2080 today. It was in VR, at night, in the cockpit of the Flight1 King Air. At the time I was wondering why my memory usage was 7GB out of 8GB, because I was at only one regional Orbx airport and the card load was ~85%.

The card is factory overclocked.

I did not reboot; I just restarted P3D, then did a six-hour daytime flight with scattered clouds with no problem. But the 2080 was only loaded to 50-60% usage most of the time, and memory usage was lower because after takeoff there was no more Orbx scenery in the flight plan except for the region at FL290.

I doubt it is a P3D problem.

On 12/4/2018 at 9:10 PM, w6kd said:

There's another TDR setting you can try in addition to TdrDelay, which is TdrDdiDelay (a DWORD value created the same way and in the same registry location as TdrDelay).  The default is 5; I have been using 20 for years, along with 10 for TdrDelay.  TdrDdiDelay sets the amount of time a thread can take to leave the driver.  Note that if you set TdrLevel to 0, none of these delay settings are used--they're meaningless--as the detection of driver errors by the TDR code is disabled.

I tried both of these today using the 32-bit DWORD. I returned my GPU's power level to 100% (it always crashes at this) and those registry settings did nothing to help; the sim crashed with a DEVICE_HUNG error within 5 minutes.  I will delete them and try the QWORD version.

Chris

4 hours ago, joepoway said:

Hi Chris

I could be wrong, but I thought you swapped cards a few weeks ago with your son and demonstrated the issue was your card; perhaps I have you confused with someone else. [...]

Joe

Hi Joe, yes I tried my son's 1080 (non-Ti) and it appeared to work for 6 hours; he had to return to university so I could only test one afternoon.  Given some have problems and many don't, I thought perhaps his card was one of the safe ones.  I actually came close to swapping him my £720 card for his £400 card.  I tried basically every driver back to 382 or something; I've deleted the downloaded drivers now as none helped.  I don't know what to do. You can barely get 1080 Ti cards now; the UK's biggest computer retailer, where I bought my own, sells a few for £900, £200 more than I paid for mine, and they are lower spec.  I doubt they would exchange it even if I managed to convince them my card is 'faulty'.  I probably don't even have a warranty anymore with them or nVidia, I have no idea what my rights are, and given Christmas is coming up I'd be without a PC for at least a month, possibly longer, for shipping both ways and testing.

4 hours ago, tooting said:

Try the 397.64 drivers. I bet you a discounted Virgin mates-rates ticket in Upper... you won't get the issue, I'm that confident.

All drivers from current back to something like 382.xx failed for me.  I tried a good 18 of them, back as far as those which were compatible with my 1080 Ti.  These crashes have been reported on the LM forums since v3.2 or 3.4, dating back 2 years, well before the 1080 Ti came out.  Users experience these crashes even on AMD cards.

 

4 hours ago, Rob Ainscough said:

Do some basic internet searches and you'll see a very common solution ... but don't let that stop you from jumping to conclusions. [...]

Rob, during many of my tests over the last few months I got DXGI crashes on a fresh Windows 10 install and a default sim with default settings, so there was basically no load on my card.  Could there not be something in the coding (DirectX or something) of P3D and some other titles which cannot handle some minor glitch in many users' cards?  My card passes every other game at 4K, and benchmarks and stress-testing utilities, without a hiccup.  I've had crashes for the past 6 months and sadly my card is now past its 12-month warranty.  I never sent it back, as I would assume the shop would run the same type of stress tests and just return the card to me.

Chris


46 minutes ago, cj-ibbotson said:

Hi Joe, yes I tried my son's 1080 (non-Ti) and it appeared to work for 6 hours; he had to return to university so I could only test one afternoon. [...]

Chris

I feel for you Chris. A friend of mine has a Gigabyte Aorus 1080 Ti with a 3- or 4-year warranty. He was having the same issues as you; we swapped cards and all was well, so he contacted the vendor and received a new card in a week or so, and all is good for now. This just happened last month and his card was about 6 months old.

Joe

6 minutes ago, joepoway said:

I feel for you Chris. A friend of mine has a Gigabyte Aorus 1080 Ti with a 3- or 4-year warranty. He was having the same issues as you; we swapped cards and all was well, so he contacted the vendor and received a new card in a week or so, and all is good for now. This just happened last month and his card was about 6 months old.

Joe

I'll send Overclockers.co.uk an email. They do have the Aorus in stock, but it's a slower version of my card I think. I paid £720 for mine a year ago but they are all £899 on their website. What did your friend say, given that many cards perform fine except for the DXGI crashes in P3D?

Chris


1 hour ago, cj-ibbotson said:

Could there not be something in the coding (DirectX or something) of P3D and some other titles which cannot handle some minor glitch in many users' cards?

Sure, apps/software can trigger CTDs, freezes, etc. ... but the DEVICE HUNG exception is being "trapped", or more accurately "handled" ... you can see it's a P3D dialog/window displaying the DEVICE HUNG message. That IS a P3D window, which means exception processing in P3D caught the exception and P3D code is still active and running.  That exception would be relayed through the DX11 API, which would be working with the nVidia driver ... so this means the application code (P3D) is working as it should and trapping the device error.
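(For illustration only: this is the generic D3D11/DXGI pattern for trapping a lost device, not LM's actual code; the function and variable names are invented.)

#include <d3d11.h>
#include <dxgi.h>

// Present a frame; if the device was lost, report why, so the app can
// show its own error dialog (as P3D does) instead of crashing outright.
HRESULT PresentFrame(IDXGISwapChain* swapChain, ID3D11Device* device)
{
    HRESULT hr = swapChain->Present(1, 0);
    if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
    {
        // The driver/OS removed the device; ask why. One possible answer
        // is DXGI_ERROR_DEVICE_HUNG: the GPU stopped responding to its
        // commands, which is exactly the code P3D's dialog reports.
        HRESULT reason = device->GetDeviceRemovedReason();
        return reason;  // caller displays the error and shuts down cleanly
    }
    return hr;
}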

There are really only two possibilities:

1.  Corrupted or faulty nVidia driver

2.  Component failure in hardware that could be intermittent ... given the data log provided earlier in the thread:

It looks to me like a memory timing failure (VRAM is just faster RAM and subject to all the same timing issues as regular RAM) ... but I'm not an nVidia engineer, so they would be the ones best placed to comment.  Also, another factor in your case is that you downgraded to a lesser GPU with lower power requirements as a "test" to see if the problem would trigger, and the problem did NOT trigger.

If you're comfortable with pulling apart your GPU, you can remove the cooler (do it VERY slowly) and check that the heat transfer pads are correctly placed on the appropriate components of the GPU (usually the memory chips and the VRMs).  I've taken apart many GPUs over the years, as I always replace the air coolers with water blocks, and I've seen some pretty poor quality control on the placement of these heat transfer pads, some not even on the VRM or memory chip and skewed off to the side (this was also the case with one of my 2080Tis I recently changed to a waterblock setup).  I've also seen very sloppy application of GPU thermal paste (3X the quantity needed and overflowing all over the place).  This is a "free" option if you're comfortable taking these GPUs apart ... just some tiny screws ... you can use an EK waterblock installation manual if you need a guide to which screws to remove: https://www.ekwb.com/shop/EK-IM/EK-IM-3830046994912.pdf

Cheers, Rob.

EDIT: Some additional diagnostics you can do:

1.  Use DDU in Safe Mode to uninstall the driver as if for a NEW GPU install, remove the GPU and move it to another free PCIe X16 slot.
2.  For any file corruptions you can run CMD (Run As Admin) "SFC /scannow" and see if it reports anything, if it does check the CBS log file for reported errors
3.  Run DISM /Online /Cleanup-Image /CheckHealth, and DISM /Online /Cleanup-Image /ScanHealth, and DISM /Online /Cleanup-Image /RestoreHealth

I doubt these three items will solve your issue, but they are good to run regardless and they will find OS issues; a command sketch follows below.
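(A sketch of that sequence, run from an elevated Command Prompt on Windows 10; the findstr line for pulling SFC entries out of the CBS log is Microsoft's documented method and is an addition here:)

sfc /scannow
DISM /Online /Cleanup-Image /CheckHealth
DISM /Online /Cleanup-Image /ScanHealth
DISM /Online /Cleanup-Image /RestoreHealth
rem If SFC reports errors it could not fix, extract the relevant log entries:
findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log > "%userprofile%\Desktop\sfcdetails.txt"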


13 minutes ago, Rob Ainscough said:

Sure, apps/software can trigger CTDs, freezes, etc. ... but the DEVICE HUNG exception is being "trapped", or more accurately "handled" ... [...]

Cheers, Rob.

 

Although I think you are right, your reasoning is still not watertight (pun intended). What if P3D is over-driving the GPU beyond specification, and somewhere in the driver specs it says that it is not the driver's or GPU's responsibility to handle it? If that is true, P3D exceeds the GPU spec, the GPU crashes, and P3D handles the crash.

It would then still be P3D's fault, wouldn't it?

It would be sad engineering practice, I admit, but is such a scenario possible, perhaps via a bug in the GPU driver that allows P3D to exceed spec?

1 hour ago, Rob Ainscough said:

Sure, apps/software can trigger CTDs, freezes, etc. ... but the DEVICE HUNG exception is being "trapped", or more accurately "handled" ... [...]

Cheers, Rob.

Thanks for this info Rob.  I am not happy taking it apart.  The retailer I purchased it from (Overclockers), literally just over one year ago, no longer sells them, but their other 10xx GPUs and the 20xx range from Inno3D come with 2- or 3-year warranties.  Inno3D isn't listed on the direct-to-manufacturer list, which is worrying as the list is very long.  I would have more confidence in Inno diagnosing a fault than the shop, whose terms state that if they cannot find one they will return the card and charge for carriage.  I do not see any UK contact details for Inno, which may be the reason they can't deal with warranties.

I have always used DDU when installing the many, many drivers I have tested.  I am currently using the latest.  No single driver has shown any beneficial results.  I also recently ran "SFC /scannow" as I got a couple of KernelBase.dll crashes and Microsoft advised one poster to try the scan amongst other things. It did find a few errors and repair them; I ran the scan a second time and there were no errors.  I haven't tried those other tests given that I've had these crashes on a brand new installation with no other software installed.  I have also been running a dual-boot test with two versions of Windows 10, one on an SSD and another on a Velociraptor drive, and both gave DXGI crashes, so it's not a drive error issue.  I got the card at the end of Nov 2017 but only started seeing DXGI crashes around June or July, pretty much not long after v4.3 came out.  Before that I didn't use the sim for long periods, so I'm unsure if the error always existed, as it can take anywhere from moments to many hours to crash.

42 minutes ago, glider1 said:

Although I think you are right, your reasoning is still not watertight (pun intended). What if P3D is over-driving the GPU beyond specification, and somewhere in the driver specs it says that it is not the driver's or GPU's responsibility to handle it? [...]

This is something I keep thinking, mainly given that the card shows no other signs of faults and performs well on everything else, with no apparent heat issues either, and I've gotten the error on an entirely default and fresh W10 installation with a default sim, no add-ons and no overclocks of any sort.  Perhaps most cards don't have an issue, but some cards (all cards I guess are different, like CPUs) get tripped up by something the sim is trying to do; the card or driver stops, then P3D shows the error.

I'm for contacting the vendor I bought it from and seeing what options I have.

EDIT - just reading Overclockers' forums regarding Inno3D: after goods are RMA'd to Overclockers they assess them and contact Inno3D; if they accept, it is then RMA'd to them for repair, and it has to go to HONG KONG 😞 Takes 4-6 weeks.

Chris


6 minutes ago, glider1 said:

What if P3D is over-driving the GPU beyond specification

Like a tight shader loop not hitting its precision point for exit?  Would that trigger a device hung? I'm not sure it would.  Or a race condition ... if it did, then it would most certainly generate excessive heat, that would show up in the data logs, and throttling would happen ... per the logs provided earlier, GPU temps were not a problem.  Memory timing errors would surface regardless of heat (within reason) ... that's an almost random draw of allocation hitting the bad memory chip or the address range where the weak transistors are located.

Anything is possible, but what we're trying to establish is the most "likely" source.  The fact that someone claims it only happens with P3D doesn't mean much; in fact, if you google the DEVICE HUNG problem, just about every person who reports it says exactly the same thing: "only happens with Battlefield IV", "only happens with CoD", "only happens with Tomb Raider", "only happens with ... fill in your software of choice".

Cheers, Rob.

  

 

3 hours ago, cj-ibbotson said:

I tried both of these today using the 32-bit DWORD. I returned my GPU's power level to 100% (it always crashes at this) and those registry settings did nothing to help; the sim crashed with a DEVICE_HUNG error within 5 minutes.  I will delete them and try the QWORD version.

Microsoft's docs say it's a DWORD value.

https://docs.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys
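For anyone wanting to try the values discussed above (TdrDelay=10, TdrDdiDelay=20), a minimal .reg sketch of those DWORD entries; units are seconds per Microsoft's docs, and a reboot is needed for TDR changes to take effect. TdrLevel=0 (which disables detection entirely) is shown commented out because it is risky:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000000a
"TdrDdiDelay"=dword:00000014
; "TdrLevel"=dword:00000000

(0x0a = 10 and 0x14 = 20, matching the values quoted above.)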

 

4 hours ago, cj-ibbotson said:

I'll send Overclockers.co.uk an email. They do have the Aorus in stock, but it's a slower version of my card I think. I paid £720 for mine a year ago but they are all £899 on their website. What did your friend say, given that many cards perform fine except for the DXGI crashes in P3D?

Chris

He basically explained that the GPU-based error kept happening more frequently, and when he swapped out the card for another everything worked well; he could switch the issue on and off like a light switch. They didn't dispute anything; he had to "buy" the replacement, and when they received the returned GPU they refunded his money.

Good Luck

Joe


1 hour ago, joepoway said:

He basically explained that the GPU-based error kept happening more frequently, and when he swapped out the card for another everything worked well; he could switch the issue on and off like a light switch. [...]

The thing there is that the replacement card might still suffer the same fate; it is just a matter of time, if it turns out that something in the software chain stresses the GPU and degrades it over time beyond what it can handle long term, either by a violation of the software specification or, unofficially, by a bad board-level or chip-level hardware design.

I hear people always referring to 1000/2000-series cards, and not the 900 series, where this happens. Is that true?

If it is, we have a big clue.

A big lead in this would be to research whether the bug also happens on 900-series or earlier GPUs. From word of mouth it does not. If it does not, why not in earlier series? What would P3D do differently on 1000/2000-series GPUs that it doesn't do on 900-series GPUs? Even dynamic lighting can be done by a 900-series GPU, can't it?

If the device-hung bug does happen on 900-series cards as well, then you can rule out hardware design as a cause of this crash, because it is too unlikely to span three generations of cards.

If it only happens on 1000/2000-series cards, I think it is unlikely that P3D with its old code base would treat the 900 series differently from the 1000/2000 series, which would suggest that it is very likely a hardware-level problem in the 1000/2000 series, EDIT: or a driver-level problem in the 1000/2000 series. But a driver-level problem is unlikely, because P3D would be using the same function calls, which would exist across all three generations of GPU driver.

 


I don't think that there is one single cause.

As an example, I was plagued with this error but "luckily" it developed into P3D v4 simply closing Windows down. I took a leap of faith and replaced my very good Corsair power supply. Corsair were good enough to send me a new one, replacing like with like. Since then there has not been one repetition, so the cause was a weakness in the power supply, presumably exposed by P3D v4, which is the most demanding software installed.

In the light of the above post, the graphics card was a GTX 970, but of course it was not the cause of the problem in this specific case.

 

