mabe54

Intel Larrabee project


Also, I no longer get Phil Taylor's blog from Intel. Did they cancel the whole project??? Just curious, :( MAB

Yup, canned...

I thought so. Wow. Thanks. Cheers, MAB
No, it's definitely not canned; he just can't post anymore for fear of leaking vital information about the project. He just posted here on the subject: http://forums1.avsim.net/index.php?showtopic=271391


Too much time was wasted on serial processing instead of parallel, and there was no way Intel could make up a lost decade in roughly two years. Still, the Larrabee ideal is the way to go, in due time. The entry point has to be as good as or better than the competition at half the price, just to get a foot in the door of the GPU market. Hardware is tough, but software will be the kicker. The other two get a breather and a chance to get their act together, because one way or another Larrabee will prevail in whatever form or shape it ends up taking.

Cheers, MAB

PS: Nice to hear that Mr. Phil Taylor is still in the thick of it. Give them hell if they don't listen. :( :( :(

Yes, LRB and I are still alive.

It's true, the consumer discrete graphics card based on the LRB1 architecture is cancelled. There may be a High-Performance Compute (HPC) product based on LRB1; that's still TBD. That would essentially be an add-on card that is a compute brick. And we are heads-down planning LRB2 to be a better LRB1.

The basic idea behind LRB is that the GPU has been getting increasingly general, starting with DX9 SM2.0-class products. SM2.0 enabled quite a bit, as did SM3.0. DX10 and SM4.0 extended that further. This led nVidia to release CUDA in 2007 to allow developers to write parallel programs to be executed on DX10-class GPUs, albeit in a subset of the C language and with some heavy memory access (read-modify-write) restrictions. Now DX11 in 2009 has "Compute Shader" to give the graphics programmer access to the same functionality, to write math programs for physics, advanced character animation, special effects, etc., just like they write graphics programs to draw triangles, light and texture them, and so on. OpenCL is the same thing on the "open" API side.

Clearly the direction is towards ever more general programming on the GPU. The GPU today is a Single-Instruction-Multiple-Data (SIMD) architecture, because vertex processing with the 4-tuple XYZW vector and pixel processing with the 4-tuple RGBA color are essentially the parallel application of a single operation to multiple data elements. Elements of MIMD (multiple instruction, multiple data) are starting to show up in the hardware now.

LRB takes this to its logical conclusion and is a GPU that uses x86 cores with a full hardware memory hierarchy (L1 + L2 + main memory) plus some special processing units (vector unit, texture unit, etc.), and lets the programmer write C code (albeit parallelized C code) with generalized memory access, thus removing the need for the general-purpose APIs (DX, OGL) and the little languages that are shader models and compute shaders. The goal is to enable classes of algorithms that cannot yet be performed efficiently on today's GPUs, whose architectures are, in the main, focused on graphics processing and the SIMD programming style those APIs enable. And given programmer familiarity with x86 and the tools and code out there, x86 has advantages that outweigh its disadvantages.

Recognizing that almost no one will use the lowest level of LRB programming until market share is gained, the standard APIs will be available. But the ultimate in power is writing to what we call LRB-native and having all the cores and all the threads at your disposal.

It was a tough call to defer the first iteration of the graphics product, but it was the right call.
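To make the SIMD point concrete, here is a minimal sketch in plain C using standard SSE intrinsics. This is only an illustration of the 4-tuple pattern Phil describes, not Larrabee's own (much wider) vector unit: a single instruction operates on all four components of an XYZW vector at once.

    #include <xmmintrin.h>  /* SSE intrinsics: 4-wide single-precision SIMD */
    #include <stdio.h>

    int main(void)
    {
        /* An XYZW position and a displacement, each packed into one
         * 128-bit register. _mm_set_ps takes arguments in w,z,y,x order. */
        __m128 pos   = _mm_set_ps(1.0f, 3.0f, 2.0f, 1.0f);
        __m128 delta = _mm_set_ps(0.0f, 0.5f, 0.5f, 0.5f);

        /* One instruction adds all four components in parallel: SIMD. */
        __m128 moved = _mm_add_ps(pos, delta);

        float out[4];
        _mm_storeu_ps(out, moved);
        printf("x=%.1f y=%.1f z=%.1f w=%.1f\n", out[0], out[1], out[2], out[3]);
        return 0;
    }

The same single-operation-on-multiple-elements shape is what vertex and pixel pipelines exploit; LRB's pitch is that you keep that vector hardware but drive it from ordinary, generally-addressable C.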


Question for Phil: how would the graphics themselves be coded? For instance, how would things that were written in GLSL or HLSL be transferred over?

There are a couple of different programming modes, one of which is bog-standard API mode, with both DX and OGL drivers provided. In the case of "to the metal" programming, yes, some facility would be provided for conversion. Basically, the programmer gets to decide how deep to dive into the LRB pool; it's not forced onto you.
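As a purely hypothetical sketch of what such a conversion might produce (nothing here reflects the actual LRB-native toolchain), a per-pixel HLSL body can be re-expressed as an ordinary C function applied across the framebuffer, with an OpenMP pragma standing in for whatever threading facility the native environment really provides:

    #include <stddef.h>

    /* Hypothetical example only. An HLSL pixel shader such as
     *     float4 main(float4 c : COLOR) : SV_Target { return c * 0.5f; }
     * becomes a plain C function over pixel data. */
    typedef struct { float r, g, b, a; } rgba;

    static rgba darken(rgba c)
    {
        rgba out = { c.r * 0.5f, c.g * 0.5f, c.b * 0.5f, c.a * 0.5f };
        return out;
    }

    void shade_all(rgba *pixels, size_t n)
    {
        /* OpenMP stands in here for LRB's cores and threads; the point
         * is that the "shader" is now just an ordinary C loop. */
        #pragma omp parallel for
        for (long i = 0; i < (long)n; ++i)
            pixels[i] = darken(pixels[i]);
    }

Under API mode none of this is the programmer's problem, since the DX/OGL driver does the equivalent work behind the scenes.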

