action #12524

Buy new hardware for Provo

Added by Anonymous about 5 years ago. Updated almost 5 years ago.

Status:
Closed
Priority:
High
Assignee:
Target version:
-
Start date:
2016-06-28
Due date:
2016-08-31
% Done:

100%

Estimated time:

Description

We want to set up new infrastructure in the Provo datacenter. That can only happen once the needed hardware is available.

SUSE is sponsoring some of the new hardware, so it is time to start thinking about what is needed...

History

#1 Updated by ganglia about 5 years ago

Gerhard has provided a brief spec of hardware. From this, Craig will seek a vendor and a purchasing quote.

3 Controller nodes:

  • 16 cores, 64 GB RAM, 2x 256 GB SSD + 2x 10G Emulex NIC incl. service + Emulex (at the moment the FC cards)

6 Compute nodes:

  • 48 cores, 512 GB RAM, 2x 120 GB SSD + 2x 10G Emulex NIC incl. service + Emulex (at the moment the FC cards)

2 redundant network switches:

  • Cisco 550XG-24F Switch
  • enough cables for all ports and cards

Storage:

  • Dothill (size?)
  • Or perhaps SUSE Enterprise Storage commodity hardware

Closing issue #12516 from the openSUSE Admin project, which I had already opened for this issue. (Darix had requested that we just use the openSUSE Admin project rather than creating a new sub-project.)

#2 Updated by ganglia about 5 years ago

  • % Done changed from 0 to 10

I've organized a Sales Call with a system engineer at HPE, to be held on Thursday, 30 Jun.

#3 Updated by AdaLovelace about 5 years ago

How was the Sales Call? Can we get the hardware?

#4 Updated by ganglia about 5 years ago

It will still take some time. HPE sent a preliminary quote yesterday, while I was traveling back to the U.S. I'll be working with them to refine the quote.

Then there will be the process of getting approvals.

All of this takes time and effort, but I'll post significant updates here.

#5 Updated by ganglia about 5 years ago

Working with HPE to refine the quote. There were apparently some misunderstandings, which is normal.

#6 Updated by ganglia about 5 years ago

  • % Done changed from 10 to 20

After some back-and-forth discussions today, I now have a usable quote from HPE. In summary:

  • 3x HP DL380 servers, for controller nodes (1 is an admin node, which is also a hot spare for failure of a controller)
    • 16 cores (2 CPUs @ 8 cores) Intel Xeon E5-2620v4 2.1GHz F10
    • 64 GB RAM
    • 2x 240GB SSD drives (sata 6Gb/s)
    • dual (redundant) power supply
    • Smart Array disk controller P440ar/2G
    • 3 years extended support
  • 3x HP DL380 servers, for compute nodes
    • 44 cores (2 CPUs @ 22 cores) Intel Xeon E5-2699v4 2.2GHz F10
    • 512 GB RAM
    • 2x 120GB SSD drives (sata 6Gb/s)
    • dual (redundant) power supply
    • Smart Array disk controller P440ar/2G
    • HP FlexFabric 10Gb 2P 556FLR-SFP+ Adptr
    • 3 years extended support
  • HP MSA 2040 SAN
    • 12x 1.8TB 12G SAS 10K 2.5in 512e HDD
    • HPE MSA 2040 1Gb SW iSCSI SFP 4 Pk
    • 3 years extended support
  • 2x FlexFabric 5700-40XG-2QSFP+ Switches
    • 4x Network cable - SFP+ - 3.3 ft

This does not include other network cables needed for the servers.

The current quote price matches our budget very closely, with about $400 to spare.

Our purchasing agent will now try to reduce the costs, which may allow us to make some small adjustments or additions.

#7 Updated by AdaLovelace about 5 years ago

ganglia wrote:

The current quote price matches our budget very closely, with about $400 to spare.

Our purchasing agent will now try to reduce the costs, which may allow us to make some small adjustments or additions.

Did the sales manager know that SUSE is working together with HPE and that this would be for a Linux project?
That could be a reason to get a bit of a discount, since HPE wants to support Linux projects. :-)

#8 Updated by gschlotter about 5 years ago

One question, Craig:
Maybe I'm not understanding something correctly, but are the SFPs for the 1G link included, or do you plan an additional switch for 1G Ethernet?

#9 Updated by ganglia about 5 years ago

Hi Gerhard,

If you'll look back at the original proposal that you provided to me, you did not specify any switch for Ethernet; you specified 2 redundant switches. Without any other information, I simply asked HPE to provide a quote for the 2 redundant switches, and asked whether the switches would support converged networks.

If you want me to buy something different than this, please specify.

#10 Updated by gschlotter about 5 years ago

Back then, the two redundant switches were planned for both storage and Ethernet. If the switches in your offer can run both, I am completely fine.

#11 Updated by AdaLovelace about 5 years ago

The 4x SFP+ network cables are more expensive than normal cables; that's one place where you could save the missing money. The same goes for the switches, and network cards for SFP+ are also more expensive.
We can use them, but normal cables and switches would do the job, too.

#12 Updated by ganglia about 5 years ago

The 4x SFP+ network cables are necessary for the storage network. The only way that you can get reliable 10G speeds for the storage I/O is through SFP+ (or fiber).

#13 Updated by ganglia about 5 years ago

I've just had my latest update conversation with Heather (our purchasing agent). This is progressing nicely. We're closing in on a deal with the vendor that should provide an extra compute server while still fitting within our budget. She's working on getting the P.O. issued today and attempting to get all approval signatures by tomorrow (Friday), in an attempt to get the best deal possible. We're not 100% sure we can get all the approvals in time, but she's going to try.

#14 Updated by ganglia about 5 years ago

  • % Done changed from 20 to 30

P.O. has been created. The purchase is now just waiting for various high-level approvals, including Roland, Ralf, and Nils.

Beyond that, there are still 2 others at Micro Focus who need to approve. Our purchasing agent is working to ensure that those 2 will be inclined to approve.

#15 Updated by ganglia about 5 years ago

  • % Done changed from 30 to 50

All approvals have been given. (Special Note: This may be the fastest that an order of this size has been approved. Remarkable thanks to Heather.)

The hardware is being ordered today. Hardware could start arriving within 2 weeks. We'll think about getting it during the first week of August.

#16 Updated by ganglia about 5 years ago

I have started to receive hardware. At this point only one of the switches has arrived. More due to arrive next week, though I don't have firm dates for the arrival of each of the components.

#17 Updated by AdaLovelace about 5 years ago

Write in this ticket when everything has arrived. Then close it and continue in https://progress.opensuse.org/issues/12528 , or assign it to an admin with access to the datacenter.
Keep it up!

I'll organize a meeting on IRC for the next steps.

#18 Updated by AdaLovelace about 5 years ago

You didn't come to our meeting.
What is the status of the hardware?

#19 Updated by Anonymous about 5 years ago

-- deleted --

#20 Updated by ganglia about 5 years ago

  • % Done changed from 50 to 70

Hardware arrived last week. Craig and Robert (more Robert than Craig) worked on installing the hardware in the racks.

I discovered a small mistake in our order that we are working to fix, but it's nothing that will prevent us from moving forward. We currently have 3 controller nodes and 3 compute nodes, as noted in comment #6 above. The mistake involves trying to get a 4th compute node; I'm working to get that resolved as quickly as possible so we can increase our number of compute nodes.

Another small issue is that we need more (and longer) SFP+ cables to connect the compute nodes to the storage switch. That should be resolved fairly soon.

Lastly, I'm still working with Micro Focus IT on the details of the configuration of network topology.

#21 Updated by AdaLovelace about 5 years ago

Have you got news about the 4th node and the network?

#22 Updated by ganglia about 5 years ago

Sadly, we can't afford to upgrade the 4th node. The node will be re-deployed elsewhere in our infrastructure, where its configuration can be used.

I've been out for a week. Prior to my short vacation, I had been working out some more details with the Micro Focus Network Infrastructure Team. We worked through some confusion about our topology and proposed network infrastructure, then ordered some new network transceivers for our switches that will hook into their network. We're waiting for the transceivers to arrive from the vendor.

#23 Updated by AdaLovelace about 5 years ago

This ticket is for buying the hardware; ticket #12528 is for organizing the network infrastructure.
Close this ticket once all the hardware is bought and available. Report problems and the network status in https://progress.opensuse.org/issues/12528 .
Thanks!

#24 Updated by ganglia about 5 years ago

Today I was delighted to have a package delivered to me, which I expected to contain the final hardware (cables). I was sorely disappointed that the package did not contain all of the things I ordered. I'm waiting to hear from the vendor, to know if they messed up, or if the rest of my order is still "in the mail."

#25 Updated by ganglia about 5 years ago

  • % Done changed from 70 to 90

I received the remaining cables from the vendor. I'll be working with Robert to get the cabling set up, and then we can focus on the network configuration.

#26 Updated by AdaLovelace almost 5 years ago

What is the status here? Will everything only be finished after Christmas?
How much longer do you need for this ticket?

#27 Updated by ganglia almost 5 years ago

  • Assignee changed from ganglia to rwawrig

This ticket has been delayed again due to a larger delay in related projects involving Micro Focus IT. Mars, Robert, and Craig all have a meeting with Micro Focus IT on Wednesday, 2 Nov. We believe we'll be able to make progress again as a result of that meeting.

#28 Updated by cboltz almost 5 years ago

Can you give us a summary of the meeting results and tell us when the new hardware will be ready, please?

Also, you probably know that we'll have a Heroes meeting on Dec 2-4 (= in a month), and I'd like to set up (at least) a staging wiki on the new hardware that weekend.

#29 Updated by AdaLovelace almost 5 years ago

Hi Robert,

how did the meeting go, and what was the result? What is the status of the network architecture, and where are the problems at the moment? We want to set up the system.

The hardware is available, and unused hardware creates costs. The wiki issues won't be finished fast enough because of the admins, and we don't have access...

Best regards,
Sarah

#30 Updated by rwawrig almost 5 years ago

The result of the meeting was that MF-IT will look over our requests again and come up with a solution by the end of the month. However, this is the same response we have gotten since the end of summer, when the hardware arrived.
Roland is here this week, and today he escalated the situation to the MF-IT directors. We are waiting for our network connection issue to be prioritized on the MF side.

#31 Updated by AdaLovelace almost 5 years ago

What was the result of the escalation?

#32 Updated by rwawrig almost 5 years ago

We decided to have an internet connection via the MF Lab network instead of their production network, as they are missing the needed hardware in production and there is no ETA for when they will get it. As the office is closed from today until Monday (Thanksgiving holiday), we expect to have the connection at the beginning of next week.

#33 Updated by AdaLovelace almost 5 years ago

Can we see the planning/ticket status for Provo anywhere?
We want to know what will be done this week. :-)

#34 Updated by rwawrig almost 5 years ago

I'm not aware of any extra ticket for this.
The current status is that we got internet connection yesterday and did the basic installation of the operating system on all machines. Next step is the planning and deployment of the cloud setup.

#35 Updated by cboltz almost 5 years ago

Thanks for the update and for doing the basic setup!

Our admin workshop (including some community members) in Nuremberg starts tomorrow (= in about 12 hours); it will mostly be hands-on, with a focus on the cloud etc. running on the new hardware. It would be very nice to have a working cloud setup by then ;-))

So - unless you have a very important task on your desk, please do the cloud setup now. ;-)

#36 Updated by rwawrig almost 5 years ago

Regarding the cloud configuration: unfortunately, the team in Nuremberg has to get their hands on this too, and it is midnight over there right now.

#37 Updated by AdaLovelace almost 5 years ago

  • Status changed from In Progress to Closed
  • % Done changed from 90 to 100

We can close this ticket and go to the next step.
Thanks, Robert! :)

Also available in: Atom PDF