Branimir

DXGI_ERROR_DEVICE_HUNG error: Is there a solution?

Recommended Posts

On 12/4/2018 at 9:10 PM, w6kd said:

There's another TDR setting you can try in addition to TdrDelay: TdrDdiDelay (a DWORD value created the same way and in the same registry location as TdrDelay). The default is 5; I have been using 20 for years, along with 10 for TdrDelay. TdrDdiDelay sets the amount of time the OS allows a thread to leave the driver. Note that if you set TdrLevel to 0, none of these delay settings are used (they're meaningless), as detection of driver errors by the TDR code is disabled.

I tried both of these today using 32-bit DWORD values. I returned my GPU's power limit to 100% (it always crashes at this), and those registry settings did nothing to help: the sim crashed with a DEVICE_HUNG error within 5 minutes. I will delete them and try the QWORD version.
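For reference, both values live under `HKLM\System\CurrentControlSet\Control\GraphicsDrivers`, per Microsoft's TDR registry documentation. A minimal sketch of how to script them (the helper name and default values are illustrative, not from this thread; w6kd's 10/20 figures are used as defaults):

```python
# Sketch: build the reg.exe commands for the two TDR delay values discussed
# above (TdrDelay and TdrDdiDelay, both REG_DWORD). The helper name and
# defaults are illustrative; run the printed commands from an elevated
# prompt on Windows, then reboot for them to take effect.
TDR_KEY = r"HKLM\System\CurrentControlSet\Control\GraphicsDrivers"

def tdr_reg_commands(tdr_delay=10, tdr_ddi_delay=20):
    """Return reg.exe commands that set TdrDelay and TdrDdiDelay (seconds)."""
    return [
        f'reg add "{TDR_KEY}" /v TdrDelay /t REG_DWORD /d {tdr_delay} /f',
        f'reg add "{TDR_KEY}" /v TdrDdiDelay /t REG_DWORD /d {tdr_ddi_delay} /f',
    ]

for cmd in tdr_reg_commands():
    print(cmd)
```

Deleting the two values (or setting TdrLevel to 0, as noted above) returns TDR behaviour to its defaults.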

Chris



 

Chris Ibbotson

4 hours ago, joepoway said:

Hi Chris

I could be wrong, but I thought you swapped cards a few weeks ago with your son and demonstrated the issue was your card; perhaps I have you confused with someone else.

Regardless, I assume you tried the 397.64 drivers stated by someone above as "the solution" and it didn't solve the problem. BTW, it didn't solve any of my friends' problems with this issue.

As I stated previously in this forum, only a new card fixed the issue for a couple of my friends. If there ARE emerging card issues, that would explain why this issue DOES show up across several games and sims, not just P3D, as I have also seen in my research.

I look forward to Pete's conclusion when he reactivates the card he disabled to see if indeed his issue was caused by his card as well.

Joe

Hi Joe, yes, I tried my son's 1080 (non-Ti) and it appeared to work for 6 hours; he had to return to university, so I could only test one afternoon. Given that some have problems and many don't, I thought perhaps his card was one of the safe ones. I actually came close to swapping him my £720 card for his £400 card. I tried basically every driver back to 382 or so; I've deleted the downloaded drivers now, as none helped. I don't know what to do. You can barely get 1080 Ti cards now: the UK's biggest computer retailer, where I bought my own, sells a few for £900, £200 more than I paid for mine, and they are lower spec. I doubt they would exchange even if I managed to convince them my card is 'faulty'. I probably don't even have a warranty anymore with them or Nvidia, and I have no idea what my rights are. Given that Christmas is coming up, I'd be without a PC for at least a month, possibly more, with shipping both ways and testing.

4 hours ago, tooting said:

Try the 397.64 drivers. I bet you a discounted Virgin mates-rates ticket in Upper that you won't get the issue; I'm that confident.

All drivers from current back to something like 382.xx failed for me. I tried a good 18 of them, going back as far as those compatible with my 1080 Ti. These crashes have been reported on the LM forums since v3.2 or v3.4, dating back two years, well before the 1080 Ti came out. Users experience these crashes even on AMD cards.

 

4 hours ago, Rob Ainscough said:

Do some basic internet searches and you'll see a very common solution ... but don't let that stop you from jumping to conclusions. The more the app/game/sim makes use of VRAM, the higher the probability it will expose hardware issues ... P3D can use A LOT of VRAM and can stress a GPU (SSAA will do that); TEXTURE_SIZE_EXP=9 or 10 will use anywhere from 4GB to 10+GB of VRAM.

Cheers, Rob.

Rob, during many of my tests over the last few months I got DXGI crashes on a fresh Windows 10 install and a default sim with default settings, so there was basically no load on my card. Could there not be something in the coding (DirectX or something) of P3D and some other titles which cannot handle some minor glitch in many users' cards? My card passes every other game at 4K, and benchmarks and stress-testing utilities, without a hiccup. I've had crashes for the past 6 months, and sadly my card is now past its 12-month warranty. I never sent it back, as I would assume the shop would run the same type of stress tests and just return the card to me.

Chris

Edited by cj-ibbotson


46 minutes ago, cj-ibbotson said:


I feel for you, Chris. A friend of mine has a Gigabyte Aorus 1080 Ti with a 3- or 4-year warranty. He was having the same issues as you. We swapped cards and all was well, so he contacted the vendor and received a new card in a week or so, and all is good for now. This just happened last month, and his card was about six months old.

Joe


Joe (Southern California)

SystemI9-9900KS @5.1Ghz/ Corsair H115i / Gigabyte A-390 Master / EVGA RTX 2080 Ti FTW3 Hybrid w 11Gb / Trident 32Gb DDR4-3200 C14 / Evo 970 2Tb M.2 / Samsung 40inch TV 40ku6300 4K w/ Native 30 hz capability  / Corsair AX850 PS / VKB Gunfighter Pro / Virpil MongoosT-50 Throttle / MFG Crosswind Pedals /   LINDA, VoiceAttack, ChasePlane, AIG AI, MCE, FFTF, Pilot2ATC, HP Reverb G2

6 minutes ago, joepoway said:


I'll send Overclockers UK an email. They do have the Aorus in stock, but I think it's a slower version of my card. I paid £720 for mine a year ago, but they're all £899 on their website. What did your friend say, given that many cards perform fine except for the DXGI crashes in P3D?

Chris

Edited by cj-ibbotson


Guest
1 hour ago, cj-ibbotson said:

Could there not be something in the coding (Direct X or something) of P3D and some other titles which cannot handle some minor glitch in many users cards?

Sure, apps/software can trigger CTDs, freezes, etc. ... but the DEVICE HUNG exception is being "trapped", or more accurately "handled" ... you can see it's a P3D dialog/window displaying the DEVICE HUNG message. That IS a P3D window, which means exception processing in P3D caught the exception and P3D code is still active and running. That exception would be relayed through the DX11 API, which would be working with the nVidia driver ... so this means the application code (P3D) is working as it should and trapping the device error.

There are really only two possibilities:

1.  Corrupted or faulty nVidia driver

2.  Component failure in hardware that could be intermittent ... given the data log provided earlier in the thread:

It looks to me like a memory timing failure (VRAM is just faster RAM and subject to all the same timing issues as regular RAM) ... but I'm not an nVidia engineer, so they would be the ones to best comment. Also, another factor in your case is that you downgraded to a lesser GPU with lower power requirements as a "test" to see if the problem would trigger, and the problem did NOT trigger.

If you're comfortable pulling apart your GPU, you can remove the cooler (do it VERY slowly) and check that the heat-transfer pads are correctly placed on the appropriate components of the GPU (usually the memory chips and the VRMs). I've taken apart many GPUs over the years, as I always replace the air coolers with water blocks, and I've seen some pretty poor quality control on placement of these heat-transfer pads, some not even on the VRM or memory chip and skewed off to the side (this was also the case with one of my 2080 Tis I recently changed to a waterblock setup). I've also seen very sloppy application of GPU thermal paste (3x the quantity needed and overflowing all over the place). This is a "free" option if you're comfortable taking these GPUs apart ... just some tiny screws ... you can use an EK waterblock installation manual as a guide to which screws to remove: https://www.ekwb.com/shop/EK-IM/EK-IM-3830046994912.pdf

Cheers, Rob.

EDIT: Some additional diagnostics you can do:

1.  Use DDU in Safe Mode to uninstall the driver as if for a NEW GPU install, remove the GPU, and move it to another free PCIe x16 slot.
2.  For any file corruptions, run CMD (Run as Admin) and "SFC /scannow" and see if it reports anything; if it does, check the CBS log file for reported errors.
3.  Run DISM /Online /Cleanup-Image /CheckHealth, then DISM /Online /Cleanup-Image /ScanHealth, then DISM /Online /Cleanup-Image /RestoreHealth.

I doubt these three items will solve your issue, but they are good to run regardless, and they will find OS issues.
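For convenience, items 2 and 3 above can be driven from one script. A minimal sketch (the wrapper function is illustrative; the commands themselves are the standard Windows SFC/DISM invocations, which need an elevated prompt):

```python
import platform
import subprocess

# The standard Windows repair commands from items 2 and 3 above, in order.
REPAIR_CMDS = [
    ["sfc", "/scannow"],
    ["DISM", "/Online", "/Cleanup-Image", "/CheckHealth"],
    ["DISM", "/Online", "/Cleanup-Image", "/ScanHealth"],
    ["DISM", "/Online", "/Cleanup-Image", "/RestoreHealth"],
]

def run_repairs(dry_run=True):
    """Return the command lines in order; on Windows with dry_run=False,
    actually execute each one (requires an elevated prompt)."""
    lines = []
    for cmd in REPAIR_CMDS:
        if not dry_run and platform.system() == "Windows":
            subprocess.run(cmd, check=False)  # SFC/DISM report to console
        lines.append(" ".join(cmd))
    return lines

for line in run_repairs():
    print("would run:", line)
```

RestoreHealth goes last deliberately: CheckHealth and ScanHealth only report, while RestoreHealth performs the actual repair.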

13 minutes ago, Rob Ainscough said:


 

Although I think you are right, your reasoning is still not watertight (pun intended). What if P3D is over-driving the GPU beyond specification, and somewhere in the driver specs it says it is not the driver's or the GPU's responsibility to handle that? If that is true, P3D exceeds the GPU spec, the GPU crashes, and P3D handles the crash.

It would then still be P3D's fault, wouldn't it?

It would be sad engineering practice, I admit, but is such a scenario possible, perhaps via a bug in the GPU driver that allows P3D to exceed spec?

1 hour ago, Rob Ainscough said:


Thanks for this info, Rob. I am not happy taking it apart. The retailer I purchased it from (Overclockers), literally just over one year ago, no longer sells them, but their other 10xx GPUs and the 20xx range from Inno3D come with 2- or 3-year warranties. Inno3D isn't listed on the direct-to-manufacturer list, which is worrying, as the list is very long. I would have more confidence that Inno could diagnose a fault better than the shop, whose terms state that if they cannot find one they will return the card and charge for carriage. I do not see any UK contact details for Inno, which may be the reason they can't deal with warranties.

I have always used DDU when installing the many, many drivers I have tested. I am currently using the latest. No single driver has shown any beneficial results. I also recently ran "SFC /scannow", as I got a couple of KernelBase.dll crashes and Microsoft advised one poster to try the scan amongst other things. It did find a few errors and repaired them; I ran the scan a second time and found no errors. I haven't tried those other tests, given that I've had these crashes on a brand-new installation with no other software installed. I've also been running a dual-boot test with two versions of Windows 10, one on an SSD and another on a VelociRaptor drive, and both gave DXGI crashes, so it's not a drive issue. I got the card at the end of Nov 2017 but only started seeing DXGI crashes around June or July, not long after v4.3 came out. Before that I didn't use the sim for long periods, so I'm unsure if the error always existed, as it can take anywhere from moments to many hours to crash.

42 minutes ago, glider1 said:


This is something I keep thinking, mainly given that the card shows no other signs of faults and performs well on everything else. There are no apparent heat issues either, and I've gotten the error on an entirely default and fresh W10 installation with a default sim, no add-ons, and no overclocks of any sort. Perhaps most cards don't have an issue, but some cards (all cards, I guess, are different, like CPUs) get tripped up by something the sim is trying to do ... the card or driver stops, then P3D shows the error.

I'm going to contact the vendor I bought it from and see what options I have.

EDIT - just reading Overclockers' forums regarding Inno3D: after goods are RMA'd to Overclockers, they assess them and contact Inno3D; if Inno3D accepts, the card then has to be RMA'd to them in HONG KONG for repair 😞 Takes 4-6 weeks.

Chris

Edited by cj-ibbotson


Guest
6 minutes ago, glider1 said:

What if P3D is over-driving the GPU beyond specification

Like a tight shader loop not hitting its precision point for exit? Would that trigger a device hung? I'm not sure it would. Or a race condition ... if it did, it would most certainly generate excessive heat, that would show up in the data logs, and throttling would happen ... per the logs provided earlier, GPU temps were not a problem. Memory timing errors would surface regardless of heat (within reason) ... it's almost a random draw of allocation hitting the bad memory chip or the address range where the weak transistors are located.

Anything is possible, but what we're trying to establish is the most "likely" source. The fact that someone claims it only happens with P3D doesn't mean much; in fact, if you google the DEVICE HUNG problem, just about every person who reports it says exactly the same thing: "only happens with Battlefield IV", "only happens with CoD", "only happens with Tomb Raider", "only happens with ... fill in your software of choice".

Cheers, Rob.

  

 

3 hours ago, cj-ibbotson said:

I will delete them and try the QWord version

Microsoft's docs say it's a DWORD value.

https://docs.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys

 


Bob Scott | President and CEO, AVSIM Inc
ATP Gulfstream II-III-IV-V

System1 (P3Dv5/v4): i9-13900KS @ 6.0GHz, water 2x360mm, ASUS Z790 Hero, 32GB GSkill 7800MHz CAS36, ASUS RTX4090
Samsung 55" JS8500 4K TV@30Hz,
3x 2TB WD SN850X 1x 4TB Crucial P3 M.2 NVME SSD, EVGA 1600T2 PSU, 1.2Gbps internet
Fiber link to Yamaha RX-V467 Home Theater Receiver, Polk/Klipsch 6" bookshelf speakers, Polk 12" subwoofer, 12.9" iPad Pro
PFC yoke/throttle quad/pedals with custom Hall sensor retrofit, Thermaltake View 71 case, Stream Deck XL button box

Sys2 (MSFS/XPlane): i9-10900K @ 5.1GHz, 32GB 3600/15, nVidia RTX4090FE, Alienware AW3821DW 38" 21:9 GSync, EVGA 1000P2
Thrustmaster TCA Boeing Yoke, TCA Airbus Sidestick, 2x TCA Airbus Throttle quads, PFC Cirrus Pedals, Coolermaster HAF932 case

Portable Sys3 (P3Dv4/FSX/DCS): i9-9900K @ 5.0 Ghz, Noctua NH-D15, 32GB 3200/16, EVGA RTX3090, Dell S2417DG 24" GSync
Corsair RM850x PSU, TM TCA Officer Pack, Saitek combat pedals, TM Warthog HOTAS, Coolermaster HAF XB case

4 hours ago, cj-ibbotson said:

What did your friend say, given that many cards perform fine except for the DXGI crashes in P3D?

He basically explained that the GPU-based error kept happening more frequently, and when he swapped out the card for another everything worked well; by swapping cards he could turn the issue on and off like a light switch. They didn't dispute anything; he had to "buy" the replacement, and when they received the returned GPU they refunded his money.

Good Luck

Joe


1 hour ago, joepoway said:


The thing there is that the replacement card might still suffer the same fate; it is just a matter of time if it turns out that something in the software chain stresses the GPU and degrades it over time beyond what it can handle long term, either by a violation of the software specification or, unofficially, by a bad board-level or chip-level hardware design.

I hear people always referring to 1000- and 2000-series cards, and not the 900 series, when this happens. Is that true?

If it is, we have a big clue.

A big lead would be to research whether the bug also happens on 900-series or earlier GPUs. From word of mouth it does not. If it does not, why not on earlier series? What would P3D do differently on 1000/2000-series GPUs that it doesn't do on 900-series GPUs? Even dynamic lighting can be done by a 900-series GPU, can't it?

If the device-hung bug does happen on 900-series cards as well, then you can rule out hardware design as a cause of this crash, because it is too unlikely to span three generations of cards.

If it only happens on 1000/2000-series cards, I think it is unlikely that P3D, with its old code base, would treat the 900 series differently from the 1000/2000 series, which suggests it is very likely a hardware-level problem in the 1000/2000 series, EDIT: or a driver-level problem in the 1000/2000 series. But a driver-level problem is unlikely, because P3D would be using the same function calls, which exist across all three generations of GPU driver.

 

Edited by glider1


I don't think that there is one single cause.

As an example, I was plagued with this error, but "luckily" it developed into P3D v4 simply closing Windows down. I took a leap of faith and replaced my very good Corsair power supply. Corsair were good enough to send me a new one, replacing like with like. Since then there has not been one repetition, so the cause was a weakness in the power supply, presumably exposed by P3D v4, which is the most demanding software installed.

In the light of the above post, the graphics card was a GTX 970, but of course it was not the cause of the problem in this specific case.

 

Edited by nolonger


Well, I tried 2-way SLI with the 1st and 2nd GPUs (the 2nd was the suspect), and, whilst it worked fine with less dense scenery (EGCC), it crashed with a DXGI device-hung error as soon as I loaded EGLL. This was daytime! In my 3-way arrangement that only happened at night (unless FTX England was enabled).

So, either the middle card is faulty, or the slot it is in is bad for SLI. I could try swapping it into the 3rd slot and re-testing, but I have other things to do today and over the weekend, so I'll leave that test till next week.

Pete

 


Win10: 22H2 19045.2728
CPU: 9900KS at 5.5GHz
Memory: 32Gb at 3800 MHz.
GPU:  RTX 24Gb Titan
2 x 2160p projectors at 25Hz onto 200 FOV curved screen

8 hours ago, w6kd said:

I've seen differing posts saying to try the 64-bit QWORD, given that the drivers and OS are 64-bit, but neither version has worked for me, sadly. I had the card's Power Limit at 80%, a little higher than in previous tests, and it crashed after an hour or two. I set it to 75% before going to bed, and the sim was still running 8 hours later, so the power is causing some sort of issue. Adjusting the Power Limit also adjusts the temp limit, I think, though the card has never reached the temperature limit when set at 100%, which is when P3D crashes.
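The power limit is being set through a GUI tool here, but on NVIDIA cards the same cap can be applied from the command line with `nvidia-smi -pl <watts>` (requires admin rights, and not all boards allow software power limits). A small sketch that converts a percentage into the watt figure nvidia-smi expects; the helper name is illustrative, and 250 W is the 1080 Ti's reference board power:

```python
# Sketch: convert a power-limit percentage into the nvidia-smi command that
# applies it. Assumes the board supports software power limits (nvidia-smi -pl);
# 250 W is the reference board power of a GTX 1080 Ti, and the helper name
# is illustrative, not from this thread.
def power_limit_cmd(board_watts, percent):
    """Return an nvidia-smi command capping power at `percent` of board power."""
    watts = round(board_watts * percent / 100)
    return f"nvidia-smi -pl {watts}"

print(power_limit_cmd(250, 75))   # the 75% overnight test
print(power_limit_cmd(250, 100))  # back to stock
```

Current draw and the configured limit can be checked with `nvidia-smi -q -d POWER`.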

 

6 hours ago, joepoway said:


I seriously do not want to buy another card, as I'm reading on the forums of the vendor I bought the card from that Inno3D can easily refuse to honour a warranty. One user found his card made a hissing noise, then a pop, then failure. He returned it to Overclockers.co.uk, who contacted Inno3D and sent them photos. Inno3D refused to allow an RMA and said the card had been damaged by the user. He states it was no such thing, and that the photos show shroud damage which he believes happened either in transit or when the retailer dropped the card. Cards also have to be returned to flipping Hong Kong for repair, which I seriously do not want to do. It's worrying how many cards show these errors when you google it, even many with the latest 2080s.

 

5 hours ago, glider1 said:


I did some googling this morning to try to find the early post on Lockheed Martin's forums dating back two years, but their search facility keeps throwing up an error. Other Google results did bring up a simmer using P3D with a 700-series card. I got my 1080 Ti on 28th Nov 2017 but did not experience DXGI crashes until June or July 2018. I've wiped my system many times and gone back to old OS builds and old drivers, and it still crashes; I immediately thought a driver or Windows update around June/July might have triggered it, but obviously not.

Chris



Share this post


Link to post
Share on other sites

For those who have RMA'd their 1080 Ti due to this problem and received a replacement, has this solved the problem, or does it recur?

Thanks, and Happy Holidays!


Gigabyte x670 Aorus Elite AX MB; AMD 7800X3D CPU; Deepcool LT520 AIO Cooler; 64 Gb G.Skill Trident Z5 NEO DDR5 6000; Win11 Pro; P3D V5.4; 1 Samsung 990 2Tb NVMe SSD: 1 Crucial 4Tb MX500 SATA SSD; 1 Samsung 860 1Tb SSD; Gigabyte Aorus Extreme 1080ti 11Gb VRAM; Toshiba 43" LED TV @ 4k; Honeycomb Bravo.

 

