The flashcards below were created by a user on FreezingBlue Flashcards.
CMC - READ AND WRITE BUFFER
- MANAGEMENT INTERFACE
- SEEPROM - FOR FAILOVER (PRIMARY AND SUBORDINATE)
- UCS DIRECTOR - ALLOWS YOU TO CONTROL THE NETWORK
- EXAMPLE: JAVA RUNTIME ENVIRONMENT TO CREATE A SAAS
Cisco Project California
Stateless Computing Model
- stateless - service profiles
- - A VIRTUAL SERVER MAKES THE HAL "A LIAR"
- - A SERVICE PROFILE IS AN OBJECT
- - 1-TO-1 RELATIONSHIP
- - TO TROUBLESHOOT, BE ABLE TO MOVE THE VCPU
Carve out space
- - file-level access cannot boot
- - block level: MBR
- - allows booting with INAS
- ACL - Access Control List
- operates on a whitelist
- we can control access
LINK AGGREGATION PROTOCOL
VIRTUAL IP ADDRESS
9100, 9200, 9500
- - 5K - ETHERNET, FCOE, NATIVE FC, ISCSI
CREATE A BOX THAT DOES IT ALL
- APPLICATION-SPECIFIC INTEGRATED CIRCUIT (ASIC)
- - MINI COMPUTER
CISCO UCS POWERTOOL
THIRD PARTY TOOL
- IOM - INPUT/OUTPUT MODULE
- 2204 - 2K SERIES, GENERATION, LAST TWO DIGITS = UPLINKS
- - 2 - 2000 SERIES
- - 2 - SECOND GENERATION
- - 04 - FOUR UPLINK PORTS
2204: 16 10GE backplane interfaces
2208: 32 10GE backplane interfaces
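The model-number convention above can be sketched in a short Python helper. The field meanings follow the notes; the backplane-port counts are from the 2204XP/2208XP data sheets.

```python
# Decode a Cisco UCS IOM model number: first digit = 2000-series IOM,
# second digit = generation, last two digits = fabric uplink port count.
BACKPLANE_PORTS = {"2204": 16, "2208": 32}  # 10GE backplane interfaces

def decode_iom(model: str) -> dict:
    digits = model[:4]
    return {
        "series": digits[0] + "000",
        "generation": int(digits[1]),
        "uplink_ports": int(digits[2:4]),
        "backplane_ports": BACKPLANE_PORTS.get(digits),
    }

print(decode_iom("2204"))
# {'series': '2000', 'generation': 2, 'uplink_ports': 4, 'backplane_ports': 16}
```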
- EHM/EHV (END-HOST MODE) - IT WILL BUILD PATHS TO PINNED PORTS
6100 FABRIC INTERCONNECT
HARDWARE ABSTRACTION LAYER
- - PROTECTS MEMORY LINKAGE
- - KERNEL 0 MEMORY
- show cluster state
- show cluster extended-state
- show fex
- show fex detail
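As an illustrative sketch, the cluster-state commands above could be scripted. The sample output format here is an assumption modeled on typical UCS Manager CLI output and may vary by release.

```python
# Pick out the primary/subordinate roles from `show cluster state` output.
# The sample text is illustrative, not a captured transcript.
sample = """\
Cluster Id: 0x8c85f7b2
A: UP, PRIMARY
B: UP, SUBORDINATE
HA READY
"""

def parse_cluster_state(text: str) -> dict:
    roles = {}
    for line in text.splitlines():
        if ":" in line and ("PRIMARY" in line or "SUBORDINATE" in line):
            fi, state = line.split(":", 1)
            roles[fi.strip()] = state.strip()
    return roles

print(parse_cluster_state(sample))
# {'A': 'UP, PRIMARY', 'B': 'UP, SUBORDINATE'}
```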
- Fabric interconnects
- Port Roles
- - server ports
- port channel - pin to port channel
- host interface
- Network interface
Changing Lead FI
YOU HAVE TO BE ON THE SUBORDINATE FI
LAYER 2 TRUNK TO THE LAN
Three steps to setting up storage
INITIATORS TO TARGET
ACTIVE ZONE SET
NPIV - N_PORT ID VIRTUALIZATION
- FLOGI - FABRIC LOGIN
- - PORT 3 GOTO
- PLOGI - PORT LOGIN (DERIVED)
- EHM SAN
- FC EHM
- NPIV - ENABLED TO ALLOW MULTIPLE N_PORT IDS ON A SINGLE PORT
LOGICAL ADDRESS: FCID (24-BIT)
- - DID - DOMAIN ID
- - AA - AREA
- - PP - PORT
- SAN = FSPF (FABRIC SHORTEST PATH FIRST)
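The 24-bit Fibre Channel ID breaks down into Domain, Area, and Port bytes, which can be demonstrated with a small Python helper:

```python
def split_fcid(fcid: int) -> tuple:
    """Split a 24-bit Fibre Channel ID into its three byte fields:
    Domain (high byte), Area (middle byte), Port (low byte)."""
    domain = (fcid >> 16) & 0xFF
    area = (fcid >> 8) & 0xFF
    port = fcid & 0xFF
    return domain, area, port

print(split_fcid(0x0A1B2C))  # (10, 27, 44)
```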
PRIMARY VLAN FOR THE UCS SERIES
COMPUTE CONFIG AND DISCOVERY
NETWORK CONFIG PACKET WALK
- LAYER 2 IS VLAN
- LAYER 3 IS ROUTING (VRF)
- DO EVERYTHING IN PAIRS IN CASE ONE PATH FAILS
- CREATE VLANS GLOBALLY
is the core technology that powers the new Cisco UCS solution
is an IT vendor best known for the hyper-converged storage product OmniCube. The OmniCube is a 2U converged storage box that contains PCI Express flash cards and hard disk drives. It uses SimpliVity's OmniStack technology, which allows for features such as deduplication and compression.
Hyper-converged storage is a software-defined approach to storage management that combines storage, compute, networking and virtualization technologies in one physical unit that is managed as a single system.
At its simplest definition, data deduplication refers to a technique for eliminating redundant data in a data set. In the process of deduplication, extra copies of the same data are deleted, leaving only one copy to be stored. Data is analyzed to identify duplicate byte patterns and to verify that the remaining copy is indeed a single, complete instance. Duplicates are then replaced with a reference that points to the stored chunk.
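A minimal Python sketch of this chunk-hash-and-reference scheme (pre-split, fixed chunks are assumed for simplicity; real systems use variable-length chunking):

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once; replace duplicates with a reference
    (here, the SHA-256 hash) pointing at the single stored copy."""
    store = {}   # hash -> chunk (single stored instance)
    refs = []    # per-chunk references into the store
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # keep only the first copy
        refs.append(digest)
    return store, refs

data = [b"alpha", b"beta", b"alpha", b"alpha"]
store, refs = dedupe(data)
print(len(data), "chunks stored as", len(store))  # 4 chunks stored as 2
```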
Data compression is particularly useful in communications because it enables devices to transmit or store the same amount of data in fewer bits. There are a variety of data compression techniques, but only a few have been standardized. The CCITT has defined a standard data compression technique for transmitting faxes (Group 3 standard) and a compression standard for data communications through modems (CCITT V.42bis). In addition, there are file compression formats, such as ARC and ZIP.
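A quick illustration using Python's standard zlib library shows the same data stored in fewer bytes:

```python
import zlib

# Highly repetitive data compresses well; the exact ratio depends on input.
text = b"the same amount of data " * 50
packed = zlib.compress(text)
print(len(text), "bytes compressed to", len(packed))

# Compression is lossless: decompressing recovers the original exactly.
assert zlib.decompress(packed) == text
```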
Cisco Unified Computing System
is a data center computing solution that unifies computing, networking, management, virtualization, and storage access.
The system eliminates the limitations of fixed I/O configurations with an I/O architecture that can be changed through software on a per-server basis to provide needed connectivity using a just-in-time deployment model
Cisco UCS 5100 Series Blade Server Chassis
Cisco's first blade-server chassis offering, the Cisco UCS 5108 Blade Server Chassis, is six rack units (6RU) high, can mount in an industry-standard 19-inch rack, and uses standard front-to-back cooling. A chassis can accommodate up to eight half-width, or four full-width Cisco UCS B-Series Blade Servers form factors within the same chassis.
- The Cisco UCS 5108 Blade Server Chassis revolutionizes the use and deployment of blade-based systems. By incorporating unified fabric and fabric-extender technology, the Cisco Unified Computing System enables the chassis to:
- Have fewer physical components
- Require no independent management
- Be more energy efficient than traditional blade-server chassis
This simplicity eliminates the need for dedicated chassis management and blade switches, reduces cabling, and allows scalability to 20 chassis without adding complexity. The Cisco UCS 5108 Blade Server Chassis is a critical component in delivering the simplicity and IT responsiveness for the data center as part of the Cisco Unified Computing System.
Cisco UCS B200 M5 Blade Server
The Cisco UCS B200 M5 server is a half-width blade. Up to eight servers can reside in the 6-Rack-Unit (6RU) Cisco UCS 5108 Blade Server Chassis, offering one of the highest densities of servers per rack unit of blade chassis in the industry. You can configure the B200 M5 to meet your local storage requirements without having to buy, power, and cool components that you do not need. The B200 M5 provides these main features:
- Up to two Intel Xeon Scalable CPUs with up to 28 cores per CPU
- 24 DIMM slots for industry-standard DDR4 memory at speeds up to 2666 MHz, with up to 3 TB of total memory when using 128-GB DIMMs
- Modular LAN On Motherboard (mLOM) card with Cisco UCS Virtual Interface Card (VIC) 1340, a 2-port, 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE)-capable mLOM mezzanine adapter
- Optional rear mezzanine VIC with two 40-Gbps unified I/O ports or two sets of 4 x 10-Gbps unified I/O ports, delivering 80 Gbps to the server; adapts to either 10- or 40-Gbps fabric connections
- Two optional, hot-pluggable, Hard-Disk Drives (HDDs), Solid-State Disks (SSDs), or NVMe 2.5-inch drives with a choice of enterprise-class RAID or passthrough controllers
- Cisco FlexStorage local drive storage subsystem, which provides flexible boot and local storage capabilities and allows you to boot from dual, mirrored SD cards
- Support for up to two optional GPUs
- Support for up to one rear storage mezzanine card
Cisco UCS Manager
Cisco Single Connect
Graphics Processing Unit
A GPU, or graphics processing unit, is used primarily for 3-D applications. It is a single-chip processor that creates lighting effects and transforms objects every time a 3D scene is redrawn. These are mathematically intensive tasks which would otherwise put quite a strain on the CPU.
Cabling Considerations for Fabric Port Channels
When you configure the links between the Cisco UCS 2200 Series FEX and a Cisco UCS 6200 series fabric interconnect in fabric port channel mode, the available virtual interface namespace (VIF) on the adapter varies depending on where the FEX uplinks are connected to the fabric interconnect ports.
Inside the 6248 fabric interconnect there are six sets of eight contiguous ports, with each set of ports managed by a single chip. When all uplinks from an FEX are connected to a set of ports managed by a single chip, Cisco UCS Manager maximizes the number of VIFs used in service profiles deployed on the blades in the chassis. If uplink connections from an IOM are distributed across ports managed by separate chips, the VIF count is decreased.
Figure 16. Port Groups for Fabric Port Channels
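Assuming ports are numbered contiguously and grouped into sets of eight per chip (ports 1-8, 9-16, and so on, an assumption consistent with the description above), a quick check for whether a set of FEX uplinks stays within one port group might look like:

```python
def same_port_group(ports):
    """True if every port number falls in the same contiguous 8-port
    group (1-8, 9-16, ...), i.e. all uplinks land on one chip."""
    groups = {(p - 1) // 8 for p in ports}
    return len(groups) == 1

print(same_port_group([1, 2, 3, 4]))   # True  - one chip, VIF count maximized
print(same_port_group([7, 8, 9, 10]))  # False - uplinks span two chips
```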
Unlike the Cisco UCS Fabric Interconnect and the Virtual Interface Card (Palo) that each have (8) COS-based queues, the FEX has (4) queues, of which only (3) are used. One FEX queue is used for strict priority Control traffic for the FI to manage the FEX and adapters. The second FEX queue is for No Drop traffic classes such as FCoE. The third FEX queue is used for Drop classes (all the other stuff). While each queue independently empties traffic in the order it was received (FIFO), the No Drop queue carrying FCoE is FIFO as well but is serviced for transmission on the wire with a guaranteed bandwidth weighting.
One could look at that and say: between the FI, FEX, and Adapter, the FEX is the odd device out sitting in the middle with inconsistent QoS capabilities from other two, creating a “hole” or “discontinuity” in the Cisco UCS end-to-end QoS capabilities. That’s a fair observation to make.
However, before we stop here, there is one very interesting and unique behavior the FEX exhibits that’s entirely applicable to this conversation:
When the FEX gets congested on any interface (facing the FI or Adapters), it will push that congestion back to the source, rather than dropping the traffic. The FEX does this for both the Drop and No Drop traffic classes. The FEX will send 802.1Qbb PFC pause messages to the FI and NIV capable adapters (such as Menlo or Palo). For non-NIV capable adapters such as the standard Intel Oplin, the FEX will send a standard 802.3X pause message.
At this point it's up to the device receiving the pause message to react to it by allowing its buffers to fill up and applying its more intelligent QoS scheduling scheme from there. For example, both the Fabric Interconnect and Palo adapter would treat the pause message as if their own link were congested and apply the QoS bandwidth policy defined in the “QoS System Class” settings in UCS Manager.
Side note: The Gen2 Emulex and QLogic adapters are NIV capable; however, they do not honor the PFC pause messages sent by the FEX for the Drop classes and will keep sending traffic that may be dropped in the fabric. The Gen1 Menlo and Palo adapters do honor the PFC message for all classes.
What this means is that while the FEX does not have the same (8) queues of the Fabric Interconnect or Palo adapter, the FEX aims to remove itself from the equation by placing more of the QoS burden on these more capable devices. From both a QoS and networking perspective, the FEX behaves like a transparent no-drop bump in the wire.
Is it perfect? No. In the ideal situation the FEX, in addition to pushing the congestion back, it would also have (8) COS-based queues for a consistent QoS bandwidth policy at every point. Is it pretty darn good? Yes! :-) Especially when compared to the alternative 10GE blade server solutions that have no concept of QoS to begin with.
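The three-queue scheme described above can be sketched as a simple classifier. The class names here are illustrative, not UCS Manager identifiers.

```python
def fex_queue(traffic_class: str) -> str:
    """Map a traffic class to one of the three FEX queues described above:
    strict-priority control, no-drop (e.g. FCoE), and drop (everything else)."""
    if traffic_class == "control":
        return "strict-priority"   # FI-to-FEX/adapter management traffic
    if traffic_class in ("fcoe",):
        return "no-drop"           # guaranteed-bandwidth, never dropped
    return "drop"                  # "all the other stuff"

print(fex_queue("fcoe"))  # no-drop
```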
Input/Output Module
End-Host Mode
Unified Computing System (UCS) fabric interconnects running in end-host mode do not function like regular LAN switches. They don't forward frames based on destination MAC addresses, and they don't run any switching protocol for either Ethernet (e.g. STP) or FC (e.g. FSPF, Domain Manager, etc.). This is because, by definition, a UCS system should exist at the edge of the LAN. A regular switch connected to the UCS system will see it as a host with a large number of MAC addresses and network interface cards. By default, all of the network ports in a fabric interconnect are in end-host mode. A fabric interconnect has four different types of ports for performing different functions. The network uplink ports connect via Ethernet or Fibre Channel to the LAN/SAN respectively. The server ports and the fabric ports connect to the fabric extender in the chassis. The management port connects to the out-of-band management network. Two clustering ports connect to the UCS Manager instances in the peered fabric interconnects.
End Host Mode Overview
In Ethernet end-host mode, forwarding is based on server-to-uplink pinning. A given server interface uses a given uplink regardless of the destination it's trying to reach. Therefore, fabric interconnects don't learn MAC addresses from external LAN switches; they learn MACs only from the servers inside the chassis. The address table is managed so that it only contains MAC addresses of stations connected to server ports. Addresses are not learned on frames from network ports, and frames from server ports are forwarded only when their source addresses have been learned into the switch forwarding table. Frames sourced from stations inside UCS take optimal paths to all destinations (unicast or multicast) inside. If these frames need to leave UCS, they only exit on their pinned network port. Frames received on network ports are filtered, based on various checks, with an overriding requirement that any frame received from outside UCS must not be forwarded back out of UCS. However, fabric interconnects do perform local switching for server-to-server traffic. This is required because a LAN switch will, by default, never forward traffic back out the interface it came in on.
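A toy model of these end-host-mode rules, learning MACs only on server ports, pinning each server to one uplink, and never forwarding a frame received on a network port back out of the system, might look like this (all names are illustrative):

```python
class EndHostFI:
    """Sketch of end-host-mode forwarding, not a real switch implementation."""

    def __init__(self, pinning):
        self.pinning = pinning   # server port -> pinned uplink port
        self.table = {}          # MAC -> server port (learned from servers only)

    def rx_server(self, port, src_mac, dst_mac):
        self.table[src_mac] = port               # learn from server ports only
        if dst_mac in self.table:                # local server-to-server switching
            return ("server", self.table[dst_mac])
        return ("uplink", self.pinning[port])    # else exit on the pinned uplink

    def rx_network(self, uplink, src_mac, dst_mac):
        # No learning on network ports; deliver only to known local stations.
        if dst_mac in self.table:
            return ("server", self.table[dst_mac])
        return ("drop", None)                    # never forward back out of UCS

fi = EndHostFI(pinning={"s1": "u1", "s2": "u1"})
fi.rx_server("s1", "aa", "ff")                   # unknown dest -> pinned uplink
print(fi.rx_server("s2", "bb", "aa"))            # ('server', 's1') - local switching
print(fi.rx_network("u1", "cc", "dd"))           # ('drop', None)
```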
The Unified Computing System (UCS) fabric interconnect is a networking switch or head unit to which the UCS chassis, essentially a rack where server components are attached, connects. ... Access to networks and storage is then provided through the UCS fabric interconnect.
Quad Small Form-Factor Pluggable Plus (QSFP+) port for rack-mount server connectivity
The Quad Small Form-factor Pluggable (QSFP) is a compact, hot-pluggable transceiver used for data communications applications. The form factor and electrical interface are specified by a multi-source agreement (MSA) under the auspices of the Small Form Factor Committee. It interfaces networking hardware (such as servers and switches) to a fiber-optic cable or an active or passive electrical copper connection. It is an industry format jointly developed and supported by many network component vendors, allowing data rates of 4x1 Gbit/s for QSFP, 4x10 Gbit/s for QSFP+, and up to 4x28 Gbit/s for QSFP28, used for 100 Gbit/s links.
Ethernet Switching Mode
The Ethernet switching mode determines how the fabric interconnect behaves as a switching device between the servers and the network. The fabric interconnect operates in either of the following Ethernet switching modes:
- End-host mode allows the fabric interconnect to act as an end host to the network, representing all servers (hosts) connected to it through vNICs. This is achieved by pinning (either dynamically pinned or hard pinned) vNICs to uplink ports, which provides redundancy toward the network, and makes the uplink ports appear as server ports to the rest of the fabric. When in end-host mode, the fabric interconnect does not run the Spanning Tree Protocol (STP) and avoids loops by denying uplink ports from forwarding traffic to each other, and by denying egress server traffic on more than one uplink port at a time. End-host mode is the default Ethernet switching mode and should be used if either of the following is used upstream:
- - Layer 2 switching for L2 aggregation
- - Virtual Switching System (VSS) aggregation layer
- When end-host mode is enabled, if a vNIC is hard pinned to an uplink port and this uplink port goes down, the system cannot re-pin the vNIC, and the vNIC remains down.
- Switch mode is the traditional Ethernet switching mode. The fabric interconnect runs STP to avoid loops, and broadcast and multicast packets are handled in the traditional way. Switch mode is not the default Ethernet switching mode, and should be used only if the fabric interconnect is directly connected to a router, or if either of the following is used upstream:
- - Layer 3 aggregation
- - VLAN in a box
For both Ethernet switching modes, even when vNICs are hard pinned to uplink ports, all server-to-server unicast traffic in the server array is sent only through the fabric interconnect and is never sent through uplink ports. Server-to-server multicast and broadcast traffic is sent through all uplink ports in the same VLAN.
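The hard-versus-dynamic pinning behavior described above can be sketched as follows (names are illustrative, not a real API):

```python
def repin(vnic_mode, pinned_uplink, up_uplinks):
    """Return the uplink a vNIC uses after a failure check: a dynamically
    pinned vNIC is re-pinned to a surviving uplink when its uplink fails,
    while a hard-pinned vNIC stays down (returns None)."""
    if pinned_uplink in up_uplinks:
        return pinned_uplink               # current pin still healthy
    if vnic_mode == "dynamic" and up_uplinks:
        return sorted(up_uplinks)[0]       # re-pin to any surviving uplink
    return None                            # hard-pinned vNIC remains down

print(repin("hard", "eth1", {"eth2"}))     # None
print(repin("dynamic", "eth1", {"eth2"}))  # eth2
```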
Server and Uplink Ports on the 6100 Series Fabric Interconnect
Each 6100 series fabric interconnect has a set of ports in a fixed port module that you can configure as either server ports or uplink Ethernet ports. These ports are not reserved. They cannot be used by a Cisco UCS domain until you configure them. You can add expansion modules to increase the number of uplink ports on the fabric interconnect or to add uplink Fibre Channel ports to the fabric interconnect.
- You need to create LAN pin groups and SAN pin groups to pin traffic from servers to an uplink port.
Ports on the 6100 series fabric interconnect are not unified. For more information on Unified Ports, see Unified Ports.
Each fabric interconnect can include the following port types:
Server Ports
Server ports handle data traffic between the fabric interconnect and the adapter cards on the servers.
You can only configure server ports on the fixed port module. Expansion modules do not include server ports.
Uplink Ethernet Ports
Uplink Ethernet ports handle Ethernet traffic between the fabric interconnect and the next layer of the network. All network-bound Ethernet traffic is pinned to one of these ports. By default, Ethernet ports are unconfigured. However, you can configure them to function in the following ways:
- Uplink
- FCoE
- Appliance
You can configure uplink Ethernet ports on either the fixed module or an expansion module.
Uplink Fibre Channel Ports
Uplink Fibre Channel ports handle FCoE traffic between the fabric interconnect and the next layer of the storage area network. All network-bound FCoE traffic is pinned to one of these ports.
By default, Fibre Channel ports are uplink. However, you can configure them to function as Fibre Channel storage ports. This is useful in cases where Cisco UCS requires a connection to a Direct-Attached Storage (DAS) device.
You can only configure uplink Fibre Channel ports on an expansion module. The fixed module does not include uplink Fibre Channel ports.