Intel’s Enterprise Extravaganza 2019: Launching Cascade Lake, Optane DCPMM, Agilex FPGAs, 100G Ethernet, and Xeon D-1600
Today is a massive day in 2019 for Intel enterprise product announcements. It combines some products that should be available immediately with others that will become available over the next few months. Rather than taking a staggered approach, Intel has put everything in one event: processors, accelerators, networking, and edge compute. Here is a brief overview of what is happening today, along with links to all of our deeper-dive articles, our reviews, and our analysis of the announcements.
Cascade Lake: Intel's New Server and Enterprise CPU
The headliner for this extravaganza is the new second-generation Xeon Scalable processor, Cascade Lake. This is the processor that Intel will push strongly throughout its enterprise portfolio. That is especially true for OEMs like Dell, HP, Lenovo, Supermicro, QCT, and others, who are all updating their product lines with the new hardware. (Some of their announcements can be found here: Dell, Supermicro.)
While these new CPUs do not use a new microarchitecture compared to the first generation of Skylake-based Xeon Scalable processors, Intel surprised most of the Tech Day press with a range of improvements in other areas of Cascade Lake. Not only is there more hardware mitigation against Spectre and Meltdown than expected, but there is also support for Optane DC Persistent Memory. The high-volume processors boost performance with up to 25% more cores, and every processor gets twice the memory support (and faster memory as well). The use of the latest manufacturing technologies allows for frequency improvements, which together with the new AVX-512 modes dramatically improve machine learning performance for those who can use them.
Intel Xeon Scalable: 2nd Gen (Cascade Lake) vs 1st Gen (Skylake-SP)
Cores: Up to 28 (Cascade Lake-SP) / Up to 56 (Cascade Lake-AP) / Up to 28 (Skylake-SP)
Cache: 1 MB L2 per core, up to 38.5 MB shared L3 (both generations)
PCIe: Up to 48 lanes (both generations)
Memory: 1.5 TB standard, up to 4.5 TB per processor with Optane (Cascade Lake); 768 GB standard (Skylake-SP)
ML Instructions: AVX-512 VNNI with INT8 (Cascade Lake)
Security: In-hardware fixes for Spectre/Meltdown Variants 2, 3, 3a, 4 (Cascade Lake)
TDP: Up to 205 W (Cascade Lake-SP) / Up to 400 W (Cascade Lake-AP) / Up to 205 W (Skylake-SP)
New to the Xeon Scalable family is the AP processor line. Intel has been hinting at it over the last year, but we finally got some details. This new Xeon Platinum 9200 family of parts combines two 28-core silicon dies in a single package, for up to 56 cores and 112 threads with 12 memory channels in a thermal envelope of up to 400 W. This is essentially a 2P configuration in a single chip, designed for high-density deployments. These BGA-only CPUs are sold only through OEMs, with an underlying platform developed by Intel, and have no direct price: customers pay for the solution, not for the product.
Intel will not produce models with F-designated Omni-Path fabric on board for this generation. Instead, there are "M" models with 2 TB of memory support and 4.5 TB "L" models designed for the Optane markets. There will also be other, partly new, letter designations:
M = Medium Memory Support (2.0 TB)
L = Large Memory Support (4.5 TB)
Y = Speed Select models (see below)
N = Networking / NFV specialized
V = Virtual Machine Density Value optimized
T = Long-life / thermal
S = Search optimized
Of all these models, the 'Y' Speed Select models are the most interesting. These provide extra power monitoring tools that allow applications to be coupled to specific cores that can achieve higher performance than other cores. This distributes the available power budget to different groups of cores, depending on what needs to be prioritized. These parts also allow for three different OEM-specified base and turbo frequency settings, letting a single system be tuned for three different types of workload.
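As a rough illustration of what "coupling applications to specific cores" means in practice, a process can be pinned to a chosen core set from user space. This is a minimal Linux-oriented sketch only: the core IDs are placeholders, and real Speed Select deployment goes through firmware and OS/orchestrator policy rather than this API.

```python
import os

def pin_to_cores(pid, cores):
    # Pin a process to a set of CPU cores via Linux sched_setaffinity.
    # On a Speed Select part, 'cores' would be the IDs of the
    # higher-frequency priority cores; here they are placeholders.
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(pid, cores)
        return os.sched_getaffinity(pid)
    return None  # affinity API not available on this platform

# Pin the current process (pid 0) to core 0
mask = pin_to_cores(0, {0})
```

The point of Speed Select is that the cores a latency-critical service is pinned to can be guaranteed a higher base frequency than the rest of the die.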
We are in the process of writing our main analysis and plan to tackle the topic in several stories from different perspectives. Stay tuned. We have the SKU lists and our launch-day coverage here:

The other key element for these processors is the Optane support, discussed next.
Optane DCPMM: Persistent Memory Modules for Data Centers
If you are confused about Optane, you are not the only one.
By and large, Intel has two different Optane product types: Optane storage and Optane DIMMs. The storage products have been on the market for quite some time, both consumer and enterprise, and offer unique random access latency beyond what NAND can do, but at a cost. For customers who can recoup that cost, it is a great product for this market.
Optane in a memory-module form factor actually works on the DDR4-T standard. The product is focused on the enterprise market, and while Intel has been talking about "Optane DIMMs" for a while, today is the "official launch". Selected customers are already testing and using it, while general availability is due in the next few months.
Optane DC Persistent Memory, to give it its official name, comes in a DDR4 form factor and works with Cascade Lake processors to enable large amounts of memory in a single system – up to 6 TB in a dual-socket platform. Optane DCPMM is slightly slower than typical DRAM, but allows much higher memory density per socket. Intel offers three different modules: 128 GB, 256 GB, or 512 GB. Optane does not completely replace DDR4 – you need at least one standard DDR4 module in the system to get it running (it acts as a buffer) – but it means customers can combine 128 GB of DDR4 with 512 GB of Optane for a total of 768 GB, instead of looking at 256 GB of pure DDR4 backed by NVMe.
With Optane DCPMM in a system, it can be used in two modes: Memory Mode and App Direct.
The first mode is the easiest to think about: as DRAM. The system sees one large DRAM allocation, but in fact the Optane DCPMM is used as the main memory, with the DDR4 as a buffer in front of it. If the buffer contains the needed data, reads and writes are as fast as typical DRAM, while data that has to come from the Optane is a bit slower. How this is negotiated is handled between the DDR4 controller and the controller on the Optane DCPMM module, but ultimately this works well for workloads that simply want a very large pool of memory, rather than keeping everything in slower NVMe.
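The DDR4-as-buffer arrangement described above behaves much like a cache in front of a larger, slower pool. Here is a toy model of that behavior, with made-up sizes and a simplistic eviction policy, purely to illustrate how hit rate in the DDR4 buffer determines effective speed.

```python
class MemoryMode:
    # Toy model of Optane Memory Mode: the DDR4 buffer caches lines
    # from the much larger Optane pool. Hits are served at DRAM
    # speed; misses pay the Optane latency. Sizes are illustrative.
    def __init__(self, dram_lines=4):
        self.dram = {}              # DDR4 buffer: line -> data
        self.dram_lines = dram_lines
        self.optane = {}            # backing Optane DCPMM pool
        self.hits = self.misses = 0

    def read(self, line):
        if line in self.dram:       # served at DRAM latency
            self.hits += 1
            return self.dram[line]
        self.misses += 1            # fetched from Optane, then buffered
        data = self.optane.get(line, 0)
        if len(self.dram) >= self.dram_lines:
            self.dram.pop(next(iter(self.dram)))  # evict oldest line
        self.dram[line] = data
        return data

mem = MemoryMode()
for line in [1, 2, 1, 1, 3]:        # repeated lines hit the DDR4 buffer
    mem.read(line)
```

With this access pattern the model records 2 hits and 3 misses; the real hardware's negotiation between the two controllers is of course far more sophisticated than this sketch.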
The second mode is App Direct. In this case, the Optane capacity behaves like a large storage drive that is as fast as a RAM disk. While not hard-bootable, this drive keeps its data between boots (a benefit of persistent memory) and allows very fast restarts to avoid serious downtime. App Direct mode is a little more esoteric than "just a large amount of DRAM", as developers may have to redesign their software stack to take advantage of the DRAM-like speeds this drive provides. It is essentially a large RAM disk on which the data persists. (ed: I'll take two)
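The software-stack redesign mentioned above usually means memory-mapping a file on the persistent device and treating it as byte-addressable storage. The following is a minimal sketch of that programming model: on real Optane DCPMM this file would live on a DAX filesystem and production code would use Intel's PMDK (libpmem); an ordinary temporary file stands in here so the example is self-contained.

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX-mounted persistent memory region.
path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.truncate(4096)  # reserve one page of "persistent memory"

# App Direct style: map the region and write to it like ordinary memory.
with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)
    pm[0:5] = b"hello"   # byte-addressable store; survives restarts
    pm.flush()           # make sure the write reaches the medium
    pm.close()

# A later run (or reboot, on real persistent memory) sees the data.
with open(path, "rb") as f:
    restored = f.read(5)
```

On actual persistent memory, PMDK replaces the `flush()` call with cache-line flush instructions, which is the part application developers have to get right.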
One of the questions when Optane was first announced was whether it would support enough read/write cycles to act as a DRAM, since the same technology was also being used for storage. To allay those fears, Intel will guarantee each Optane module for three years, even if the module is driven at full write load for the entire warranty period. Not only does this show that Intel is confident in its own product, but it has also convinced the very skeptical Charlie of SemiAccurate, who has been criticizing the technology for a long time (largely due to a lack of pre-launch information). He seems to be satisfied for now.
Prices for Intel's Optane DCPMM are not known at this time. The official line is that there is no specific MSRP for the different sized modules. Pricing likely depends on which customers end up buying into the platform, how much they buy, what support level they need, and how Intel works with them to optimize the setup. It is likely that cloud providers will offer instances backed by Optane DCPMM, and OEMs such as Dell indicate that systems are scheduled for general availability in June. Dell said it expects customers to start with the large Memory Mode first; users who could accelerate their workflow with App Direct mode may need some time to rewrite their software.
Intel has given us remote access to some systems with Optane DCPMM installed. We are still working out the best way to evaluate the hardware, so stay tuned.
Intel Agilex: The New Generation of Intel FPGAs
The acquisition of Altera a few years ago was big news for Intel. The idea was to bring FPGAs into the Intel family of products and finally realize a range of synergies between the two: integrating the portfolio while leveraging Intel's manufacturing facilities and distribution channels. Despite that event happening in 2015, every product released since the acquisition had been developed before the two companies were integrated – until today. The new Agilex family of FPGAs is the first to be fully developed and produced under the Intel name.
The announcement for Agilex is today, but the first 10nm samples will be available in the third quarter. The role of the FPGA has evolved in recent years, from providing general-purpose gates to offering hardened accelerators and enabling new technologies. With Agilex, Intel wants to deliver that mix of acceleration and configurability, not just with the core array of gates, but also with additional chiplet extensions enabled by Intel's Embedded Multi-Die Interconnect Bridge (EMIB) technology. These chiplets may be custom third-party IP, PCIe 5.0, HBM, 112G transceivers, or even Intel's new Compute eXpress Link cache-coherent architecture. Intel promotes up to 40 TFLOPs of DSP performance and positions Agilex for mixed-precision machine learning, with strong support for bfloat16 and INT2 through INT8.
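For those unfamiliar with bfloat16, one of the formats Agilex accelerates: it keeps float32's sign bit and 8 exponent bits but only 7 mantissa bits, so it preserves float32's dynamic range while halving storage. A minimal Python sketch of the conversion (simple truncation; real hardware typically rounds to nearest):

```python
import struct

def float32_to_bfloat16(x):
    # Keep the top 16 bits of the float32 encoding:
    # sign (1) + exponent (8) + mantissa (7).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bfloat16_to_float32(b):
    # Re-expand by zero-filling the dropped mantissa bits.
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# bfloat16 keeps float32's range but only ~3 decimal digits of precision
y = bfloat16_to_float32(float32_to_bfloat16(3.14159))  # ≈ 3.140625
```

The appeal for machine learning is that the exponent field matches float32's, so gradients rarely overflow or underflow the way they can in IEEE half precision.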
Intel is launching Agilex in three product families: F, I, and M, in that order of time and complexity. The Intel Quartus Prime software for programming these devices will be updated in April, and the first F models will be available in the third quarter.
Columbiaville: Intel 800 Series Controllers for 100 GbE
Currently, Intel offers a range of 10 gigabit and 25 gigabit Ethernet infrastructure for the data center. The company launched 100G Omni-Path as an early alternative a few years ago, and plans a second-generation Omni-Path to double that speed. In the meantime, Intel has developed and will launch its Columbiaville controller offering for the 100G Ethernet market, known as the Intel 800 series.
The introduction of faster connectivity to data center infrastructure is certainly positive, but Intel intends to use the product to promote some new technologies. Application Device Queues (ADQ) help accelerate priority packets to ensure consistent performance, while Dynamic Device Personalization (DDP) allows more programmable functionality within the packet pipeline for unique network setups, adding functionality and/or security.
The dual-port 100G card is called the E810-CQDA2, and we are still waiting for details about the chip: size, price, process, and so on. Intel indicates that its 100GbE offerings will be available in the third quarter.
Xeon D-1600: Improving Generational Efficiency for Edge Acceleration
One of Intel's key product areas is its position in both computing and networking. One of the products Intel has focused on here is Xeon D, which covers either high-frequency compute with accelerated networking and cryptography (D-1500) or high-throughput compute with accelerated networking and cryptography (D-2100). The former is based on Broadwell and the latter on Skylake. Intel's new Xeon D-1600 is a direct successor to the D-1500: a true single-die solution that adds extra frequency and efficiency. It is still built on the same manufacturing process as the D-1500, so Intel partners can easily adopt the new version without many changes in functionality.