Alright, so you probably heard about this on the blogs,
you've seen it on a bunch of cool websites. This is a very interesting product,
and it's really important for a lot of people out there.
It's gonna save a lot of big companies and government agencies a lot of money.
This is the Nvidia Tesla C1060 Computing Processor.
And what the Tesla does is it gives computing scientists and researchers
the power of a small-to-medium-sized supercomputing cluster,
but in a workstation form factor.
Like, this is just a card; it looks like a video card.
It will fit in your computer, but it gives you the power of a room full of rack systems.
Now, what this card does is bring GPU parallel processing
into a small form factor with a PCI Express 2.0 bus,
so you can throw it in a workstation and have this immense amount of power.
It can actually do over a teraflop of raw processing power.
And it introduces something new, which is double-precision floating point.
It still does single precision, obviously,
but it does doubles too. Now, what that means depends on what you're gonna use it for.
Something like this is gonna be used for oil and gas exploration,
computational finance, fluid dynamics, medical research, weather modeling.
I mean, this is not for regular people.
They're gonna be using that, and what makes the whole thing possible
is that you're utilizing Nvidia's CUDA API, which is based on the C programming language.
There's a great SDK on the web you can download to write custom programs for this type of hardware.
Pretty much, rather than using your CPU or a cluster of CPUs
to do these intensely parallel tasks, you use parallel processors.
And this has 240 stream processors, or ALUs, in there.
So they're gonna do everything really fast, all at the same time, in parallel.
And if you don't really understand the concept of parallel, this is not a card for you.
But if you're just curious and watching this video:
at NVISION 08, the guys from MythBusters did a great demonstration.
Watch it and it'll give you an idea of how parallel computing works.
Now, besides the 240 stream processors at 1300 MHz,
it's got 4 GB of GDDR3 at 800 MHz
on a 512-bit memory interface. That gives you easily over 100 GB per second
of memory bandwidth.
Now, to power this thing, you're gonna need at least two 6-pin PCI Express connectors,
which you can do one here and, I guess blocking out one of these, one there.
Or you can just do a single 8-pin. If you do the 8-pin, you're fine;
if not, you need to do two 6-pins.
So don't think you get to do the 8 and a 6 together; you gotta do one or the other.
Now, those 240 stream processors that are all up in here are literally very, very powerful.
They're the 10-series stream processors from Nvidia. They have a floating-point unit,
a logic unit, a move-and-compare unit, and a branch unit.
Also on here is a thread manager; I'm actually not sure exactly where it sits.
But there's a hardware thread manager that can support up to 12,000 threads with zero overhead
for thread switching, so you can manage massive data sets.
You also have 4 GB of memory on board, so huge, massive data sets
are gonna be processed through here.
If you look over here: obviously, this is just a computational unit,
so there aren't gonna be any display outputs on here; that's not what this is for.
And this is your PCI Express 2.0 bus, down here.
Now, if you wanna know: can I use a plain PCI Express x16 slot instead of an x16 2.0 slot?
Yes, you can, but you're gonna lose some of that bandwidth,
and you kinda don't wanna do that when you're trying to get massive amounts of computational
power, over one teraflop per card, so keep that in mind.
You do need a good amount of power. This will draw about 150 watts,
so it's really nice to have about a 600-watt power supply just to run one of these.
But the best part about the Tesla C1060 Computing Processor is that with this type of technology,
and the right kind of motherboard and the right kind of system,
you can build yourself a supercomputer that fits inside a regular computer case.
And I'm not even kidding. Here's how you do it.
First thing you need is the right motherboard. You need a motherboard that supports
Opteron or Xeon processors, or, nowadays, you can use Core i7,
which is the latest platform, which is awesome. Very, very fast.
What's most important, though, is you need to have a minimum
of 3 or 4 PCI Express 2.0 slots. This card is PCI Express 2.0, and that's where it works best.
Now, these have 4 GB of memory each, so you need an additional 4 GB of system memory per card.
What you're gonna do is get an awesome motherboard,
like a Skulltrail board that has four PCI Express 2.0 slots, or a Core i7 motherboard
that's really high-end, especially one that has workstation features like SAS ports on it.
Then you're gonna get a minimum of 12 GB of FB-DIMMs or triple-channel DDR3,
whatever your board requires. And you're gonna get three of these Tesla cards.
You're gonna load up all three of them, and these things will actually scale fully.
It's not like SLI or anything from the gaming side.
We're talking about full, all-out scaling, a 1-to-1 ratio.
Your final card will be either a Quadro FX card for 3D modeling,
a Quadro CX for editing, or an NVS if you need to set up a bunch of monitors.
And pretty much make sure your case has the expansion slots,
because these are gonna take two, two, two, and then your graphics card,
your workstation card. And then a power supply that pushes probably 1200 to 1500 watts,
because you are gonna use it. Once you put all of that together,
you load up Windows XP 64-bit or Linux 64-bit.
Eventually they're gonna have support for Vista and Windows 7, but that's not out yet,
so it's either XP or Linux for now. And you literally build a supercomputer.
A room full of racks of CPUs, and you're building it in a case.
You put some SAS drives in there, you get some big monitors,
and you have an amazing workstation for a scientist, a researcher, for medicine.
You can do so much with it, and the best part is, since you can build a system like that for $10,000
versus a 5-million-dollar supercomputer, you can actually give everybody on your staff
one of these workstations. And they're gonna have the power of your cluster at their desk.
And they don't have to share the cluster anymore.
So now, if they have a problem they need to figure out, they can simply plug in
their calculations, do whatever they need to do with their models,
and run it right at their desk. They don't need to wait to schedule
cluster time for processing.
And it's actually gonna be much faster than some of the 5-10 million dollar
supercomputers that are out there.
So it just goes to show you how immensely powerful the Nvidia CUDA platform is,
and how powerful GPU processing is. Right now this is stuff for scientists,
but it's also moving mainstream. In the future, you're gonna see it
with stuff like the Quadro CX cards from Nvidia. They do the same thing, but for home systems.
They're meant for CS4, for doing video editing and Photoshop stuff.
It's using GPU power to process stuff that's better done in parallel.
And it gives you a huge boost in performance, but this is on a whole other level.
This is for scientists and computing researchers and all these crazy people doing all this stuff.
Computational finance that I really don't understand.
But it's a great product, definitely look into it for your company.
And definitely consider it if you're doing a lot of research that requires
this type of parallel processing.
If you have any questions on it, don't send them to me; send them to Nvidia,
because I don't know that much about this product. And... goodbye.
So, if you're a scientist or a researcher and you want to get some more information
on the Nvidia Tesla C1060 Computing Processor, type P56-1060
into the search engine of any of these major retailers.
For computertv, I'm Albert.
(C) 2008 SYX Services, Inc. All Rights Reserved Channel: TigerDirectBlog