      01-03-2020, 05:16 PM   #95
AlpineWhite_SJ
Banned

Drives: 2018 F80 M3 ZCP, 2020 F97 X3MC
Join Date: Sep 2017
Location: Bay Area, CA


Quote:
Originally Posted by zx10guy
As an experiment, I installed the workhorse of business-class 10Gig NICs (an Intel X520) into my gaming PC running Windows 8.1 (since upgraded to Windows 10) to see if it would work. To my surprise, it did, without having to play driver roulette.

One of the activities I do, more for my IT hobby and work, is moving around large ISO and OVA/OVF files while building out new virtual machines in my vSphere cluster. Being able to move and load those files over 10Gig connectivity saves a ton of time.

In addition to that, my MD3800i iSCSI array requires 10Gig connectivity. Something I haven't leveraged yet, but will once I get my two Cisco UCS C240 servers up and running, is FCoE...more specifically, the features of DCB (data center bridging).

Also, I have a few connected devices on my PoE access switch whose traffic gets distributed to other parts of my network. If I ran that over even bonded 1Gig links, performance would be worse than over a single 10Gig link. I have 10Gig running between this switch and the top-of-rack switch in my server rack.

Truth be told, the server I run 24/7 is the physical host for about 24 VMs, of which about 8 run 24/7. Because of the traffic going in and out of these VMs, that server is 40Gig-attached to the top-of-rack switch.
Going to run vGPU in those C240s?
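
On the file-copy and bonding points above, some rough napkin math. The file size and the ~90% link efficiency are just assumptions for illustration, not numbers from zx10guy's setup:

Code:
# Rough transfer-time math behind the "10Gig saves a ton of time" point.
# All figures here are illustrative assumptions, not measurements.

def transfer_time_seconds(size_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds to move size_gb gigabytes over a link_gbps link,
    with `efficiency` roughly covering TCP/protocol overhead."""
    size_gbits = size_gb * 8
    return size_gbits / (link_gbps * efficiency)

iso_gb = 15.0  # assumed size of a big ISO/OVA, not a figure from the quoted post

for gbps in (1, 10, 40):
    t = transfer_time_seconds(iso_gb, gbps)
    print(f"{iso_gb:.0f} GB over {gbps:>2} Gbit/s ~ {t:.0f} s")

# Bonded 1Gig caveat: LACP/port-channel hashing pins a single flow to one
# member link, so one big file copy still tops out near the 1 Gbit/s line
# above, which is why a single 10Gig link beats a 1Gig bond for this.

In other words, a single ~15 GB transfer drops from a couple of minutes at 1Gig to well under half a minute at 10Gig, and bonding 1Gig links doesn't help a single copy at all.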