GHZ Magazine Project




An Introduction

In this issue of GHz magazine, we highlight some of the more in-depth concepts behind optimizing a PC for performance, discussing the why and the how, along with contemporary topics from the PC marketplace to give readers a quick recap of what has been going on. It brings us joy to share this work with our readers; our goal is to be an innovative magazine dedicated to the discussion of new technologies for PC enthusiasts who want the details, the kind of people who want to know why the latest and greatest hardware is the best. We strive to be the informational outlet for the leading-edge components that make life all the better, providing in-depth analysis, comparisons, and advice on where to get them.



CONTENTS

6    GTX1050 Revival
10   CPU Cooling
18   Graphics Battle
22   Overclocking
28   Interview with Jensen Huang
32   NVMe
36   Credits




More Than Rumors

The rumors are true: Nvidia is supplementing the sky-high demand for GeForce RTX 30-series graphics cards by releasing stock of older GPUs to board partners, who can then use those to craft new hardware. It's a smart solution for the frustrating graphics card market we're currently mired in. "There have been a lot of rumors recently about Nvidia releasing more GPUs to AIBs from older generations—the [GeForce] RTX 2060 and GTX 1050 Ti were specifically mentioned," I asked an Nvidia spokesperson via email. "Can you comment on whether there's any truth to that?"

“The products referenced below were never EOLed [end-of-lifed—ed]. So ‘reviving’ seems like the wrong terminology to use here,” Nvidia told me, referencing the use of the word “reviving” in my subject line. “More of an ebb and flow really. We’re just meeting market demand which remains extremely high as you noted.” So there you have it. Yes, Nvidia is also supplying older GPUs to feed the insatiable demand for graphics cards. Again: It’s a smart move, as the GeForce RTX 2060 and GTX 1050 Ti sidestep some of the conditions creating such a volatile market.



Why reviving older GeForce GPUs makes sense

The RTX 2060 debuted in January 2019 for $350. It supports Nvidia's ray-tracing and DLSS ambitions but is built using an older 12nm TSMC manufacturing process, avoiding the hotly in-demand current nodes. (Nvidia's RTX 30-series GPUs are built on Samsung 8nm, while shortages for AMD products and next-gen consoles stem from a logjam on TSMC's 7nm node.) Ramping up production of this GPU puts more relatively current graphics cards with modern features on the streets, and creating hardware with firmly established manufacturing nodes tends to be much cheaper than building GPUs on a cutting-edge process.


The GeForce GTX 1050 Ti, on the other hand, launched all the way back in 2016 for $140. Before you call it obsolete, consider that the humble GPU remains the second-most-popular graphics card on Steam (with the RTX 2060 at fifth), and deploying it again provides some key strategic advantages. Perhaps most important? It comes with 4GB of GDDR5 memory. For months, rumors have said that there's a shortage of the GDDR6 memory used with modern GPUs, and if that's indeed the case, rolling out the GDDR5-equipped 1050 Ti again lets Nvidia build affordable hardware without tapping into its precious GDDR6 reserves.

The capacity matters just as much. Part of the reason for the current GPU crunch is the Ethereum cryptocoin mining bubble, which is insanely profitable right now. Ethereum, unlike Bitcoin, gets mined on graphics cards, but it requires a GPU with more than 4GB of onboard memory. Bottom line: the reborn GTX 1050 Ti can't be used to mine Ethereum, so it won't appeal to those buyers. Better yet, as a 75W card that can slip into any computer's PCIe slot without the need for extra power connectors, the GTX 1050 Ti is an easy and affordable upgrade for newfound PC gamers who took to the hobby while stuck at home for months. Just slap it in and start playing!


Not all that affordable

The extra stock for these older cards alleviates several kinds of pressure for Nvidia and its partners, but the RTX 2060 and GTX 1050 Ti are still subject to today's market pressures themselves. We can currently find a single GTX 1050 Ti for $190 on Newegg, but most are selling for $400 or more. Again, these cards debuted at $140 in 2016. Only a handful of RTX 2060 models are available on Newegg, starting at $800. Those prices all remain outrageous for what you're getting as a PC gamer.

And despite Nvidia’s statement that these two GPUs were never formally sent out to pasture, make no mistake: The GTX 1050 Ti was in all but name, at the very least. It has since been replaced by the GTX 1650, and retailers told Tech Yes City (who broke the revival news) that “it feels like two years ago” since they last saw stock of the GTX 1050 Ti.

The rebirth of older graphics cards makes you wonder if we’ll see budget-minded next-gen cards anytime soon though. Nvidia is selling every high-margin RTX 30-series GPU it can manufacture right now, and if classic graphics cards are holding down the entry-level market to some degree at least, it makes business sense to focus on products that make you the most money. We’ll see how it goes. Stay tuned to the latest must-know news over at our roundup of the best graphics cards for PC gaming.



Too Cool, Air or Liquid


Here's what you need to know when choosing between liquid cooling and air cooling, including how these two methods work and which one is right for you.

By Intel



"During normal operation, the transistors inside a CPU convert electrical energy into thermal energy (heat). This heat increases the temperature of the CPU. If an efficient path for that heat doesn't exist, then the CPU will exceed its safe operating temperature." -Mark Gallina, Intel Principal Engineer



Submissions from Johnathan Hu and Bill Salnik

Like any powerful piece of PC hardware, the CPU generates heat when in operation and needs to be properly cooled to achieve maximum performance. But what's the best way to keep your CPU operating at the ideal temperature? There are many ways to cool a processor, but most desktops and laptops use an air- or liquid-based cooler. We're going to talk about liquid cooling vs air cooling: how they work, the pros and cons of each, and which might be right for your setup.

How a CPU Cooler Works

Both air and liquid CPU coolers operate on a similar principle, and both do essentially the same thing: absorb heat from the CPU and redistribute it away from the hardware. The heat generated by the processor itself is distributed to the metal lid of the CPU, called the Integrated Heat Spreader (IHS). The heat is then transferred to the baseplate of the CPU cooler. That heat is then distributed, either by liquid or via heat pipe, to a fan, where it is blown away from the cooler and eventually away from the PC. Though the underlying mechanics are similar, the two methods achieve this heat redistribution in very different ways. Let's start with an air cooler.

Cooling with Air

In an air cooler, the heat is transferred from the IHS of the CPU, through the applied thermal paste, and into a conductive baseplate that is usually made from copper or aluminum. From the baseplate, that thermal energy moves into the attached heat pipes. The heat pipes are designed to conduct heat from one location to another. In this case, the heat moves to a heatsink that is elevated off of the motherboard, freeing up space for other components, such as RAM. These pipes deliver the energy in the form of heat to the thin metal fins that make up the heatsink. These fins are designed to maximize exposure to the cooler air, which then absorbs the heat from the metal. An attached fan then pushes the warm air away from the heatsink. The effectiveness of an air cooler can vary, depending on factors such as the materials used in construction (copper is more conductive than aluminum, for example, though aluminum is cheaper) and the size and quantity of fans attached to the CPU heatsink. This explains the variation in the size and design of air-based CPU coolers. Larger air coolers usually dissipate heat better, but there isn't always room for a bulky cooling solution, especially in a small form factor PC. We'll further explore the advantages of air cooling, but first, let's go over liquid cooling for the sake of comparison.

Cooling with Liquid

As with air coolers, there's a wide selection of available options, but most fall into two categories: All-in-One (AIO) coolers, or custom cooling loops. We'll mostly be focusing on All-in-One (AIO) coolers here, though the fundamental principles of how the liquid cools the CPU are the same in both. Similar to air cooling, the process starts with a baseplate that is connected to the IHS of the CPU with a layer of thermal paste. This allows for better heat transfer between the two surfaces. The metal surface of the baseplate is part of the waterblock, which is designed to be filled with coolant. The coolant absorbs heat from the baseplate as it moves through the waterblock. It then continues to move through the system and upward through one of two tubes to a radiator. The radiator exposes the liquid to air, which helps it cool, and fans attached to the radiator then move the heat away from the cooler. The coolant then reenters the waterblock, and the cycle begins again.
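For intuition, both designs can be reduced to the same back-of-the-envelope model: the CPU settles at roughly ambient temperature plus the power it dissipates times the total thermal resistance of the heat path just described. The sketch below is a minimal illustration of that idea; the wattage, resistance values, and ambient temperature are made-up numbers, not figures from Intel.

```python
# Minimal lumped thermal model: CPU temperature rises above ambient by
# (power dissipated) x (total thermal resistance of the heat path).
# All numbers here are hypothetical, chosen only to illustrate the idea.

def cpu_temp_c(power_w, ambient_c, resistances_k_per_w):
    """Steady-state CPU temperature for a chain of thermal resistances."""
    return ambient_c + power_w * sum(resistances_k_per_w)

# Example: a 125 W CPU with 30 C case air; paste + cooler resistances in K/W.
modest_cooler = cpu_temp_c(125, 30, [0.05, 0.30])   # ~73.8 C
better_cooler = cpu_temp_c(125, 30, [0.05, 0.15])   # ~55.0 C

print(f"modest cooler: {modest_cooler:.1f} C, better cooler: {better_cooler:.1f} C")
```

Whether the path runs through a fin stack or a radiator, lowering that total resistance is what every cooler in this comparison is ultimately competing on.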



Submission by Stephen Anders

Which Is Right for You?

Both cooling options are highly effective when properly implemented, but excel in different circumstances. Here are a few factors to consider when making your choice.

Ease of Installation

Though an All-in-One (AIO) liquid cooler is often more complex to install than a standard air cooler, it's still fairly straightforward. Most consist of only the waterblock, the two hoses that cycle the coolant, and the radiator. The extra steps involve attaching the waterblock, which is a process similar to installing an air cooler, and then attaching the radiator and the fans in such a way that the excess heat can easily exit the PC. Since the coolant, pump, and radiator are self-contained in the apparatus (hence the name "All-in-One"), it requires very little oversight or maintenance after installation. Installing a custom loop, on the other hand, requires more effort and education on the part of the builder. The initial installation process might be more time-consuming, but the added flexibility allows for significantly more customization and the option of including other components, such as a GPU, in the loop if desired. These more complex custom loops can also support builds of all shapes and sizes when properly implemented.

Size

Air coolers can be bulky, but that bulk is limited to one area, as opposed to being distributed across your system. With an All-in-One (AIO), on the other hand, you'll need space for the radiator, and will also need to factor in issues like proper orientation and alignment of the waterblock and coolant tubes. That said, if you're working in a smaller build, a bulky air cooler might not be the best option. A low-profile air cooler or an All-in-One (AIO) with a small radiator could be a better fit. When planning your upgrade or choosing your case, ensure that you have sufficient space for your cooling solution of choice and that your case supports the hardware you've selected.

Price

Price can vary substantially depending on the features you're prioritizing. Generally speaking, though, air coolers cost less due to their more straightforward operation. There are entry-level and premium versions of both. A premium version of an air cooler might have a larger heatsink, better fans, and provide different aesthetic options. A high-end All-in-One (AIO) liquid cooler might have a larger radiator, and offer a mix of aesthetic and functional customization, such as software to control fan speeds and lighting. Both air and liquid CPU coolers are priced across a large spectrum, depending on the features you're looking for.



SIZE. SOUND. TEMPERATURE.



Sound

Liquid cooling, especially when using an All-in-One (AIO), tends to be quieter than the fan on a CPU heatsink. Again, this can vary, in that there are air coolers with fans specifically designed to reduce noise, and fan settings or fan selection can impact the amount of noise generated. Overall, though, liquid cooling tends to generate less sound, as the small pump is usually well insulated, and radiator fans tend to run at lower RPM (revolutions per minute) than those on the CPU heatsink.

Temperature Regulation

If you're serious about overclocking, or plan on undertaking CPU-intensive tasks like rendering video or streaming, liquid cooling might be the best choice. According to Mark Gallina, liquid cooling more "efficiently distributes heat over more convection surface area (radiator) than pure conduction, allowing for reduced fan speeds (better acoustics) or higher total power." In other words, it's more efficient, and often quieter. If you want the lowest possible temperatures, or if you're interested in a quieter solution and don't mind a slightly more complex installation process, liquid cooling is probably the best option. Air coolers are quite good at relocating heat away from the CPU, but keep in mind that heat is then dispersed into the case. This can raise the ambient temperature of the system overall. Liquid coolers do a better job of relocating that heat outside of the system via the fans on the radiator.


Make Your Choice

So, back to the original debate: liquid cooling vs air cooling. Which is better? The answer depends on how you use your computer and the performance and workloads you expect to encounter. If you want almost silent operation, the most efficient cooling, and don't mind a potentially higher price tag, liquid cooling will fit the bill. If you're looking for a solution with more entry-level pricing and simple installation at the potential expense of peak performance or acoustics, air cooling is an easy recommendation. Consider how you use your PC, and how you plan to use it in the future when making your choice. Though both are excellent solutions, they are designed for slightly different use cases. It's up to you to decide which is a better fit for how you use your computer.


Graphics Battle

With the current GPU market heavily impacted by a component shortage, the selection has come down to slim pickings, leaving many people who are either looking to build a PC or looking to upgrade in a sticky situation. Here, we will take a look back at the GTX 1060 6GB and the GTX 1650 4GB to see whether it is worth getting one of these cards during this shortage or whether it is better to wait once more.



Architecture

Despite both being developed by Nvidia, on an architectural level these two cards are quite different. Let's unpack the architecture of the two. As part of Nvidia's GeForce 10 series, the GTX 1060 features the Pascal architecture, which by today's standards might be considered outdated by some. At the time of release back in 2016, Nvidia promised a speed uplift of up to 6x that of older cards, as well as improvements across a variety of areas, including higher memory bandwidth, more CUDA cores, and better overall efficiency. In addition, the 1060 was developed using TSMC's 16 nm FinFET process. On the other hand, despite having a similar name, the GTX 1650 is actually part of Nvidia's GeForce 16 series. However, unlike faster GPUs (such as the RTX 2060), the 1650 does not integrate any ray-tracing or tensor cores. It is nevertheless based on the newer Turing architecture, which makes the 1650 the more 'future-proof' of the two, as it features more recent hardware for handling today's games.

Resolution

The next specification we're going to discuss is resolution. The GTX 1650 utilizes a TU117 GPU, which is a smaller and more budget-friendly variant of the TU116 that powers other 16-series cards like the GTX 1660 and 1660 Ti. In order to hit a lower price point, Nvidia has also given the 1650 just 896 CUDA cores compared to the 1060's 1280, which means the 1650's overall speed has taken a hit. On average, you can expect the GTX 1060 to deliver higher frame rates at both 1080p and 1440p.

Dimensions

Before you make your selection, it's important to double-check that the card will be compatible with your current gaming setup. Both cards have pretty similar system requirements, which we'll compare. The GTX 1650 is a dual-slot card that requires a PCI Express motherboard with a dual-width x16 graphics card slot. The card's dimensions are as follows: 229mm x 111mm x 35mm. Similarly, the GTX 1060 is also a dual-slot card and requires a PCI Express motherboard with a dual-width x16 graphics card slot. The GTX 1060's dimensions are as follows: 250mm x 111mm.



GTX1060

Cooling

The next specification we're going to compare is the cooling system of each card, as this will help indicate how well each GPU can handle heavy workloads without becoming overburdened. To start, the GTX 1060 has been carefully designed by Nvidia engineers with premium materials that help keep the interior cool. It has a single fan and a thermal solution that prevents the interior from overheating, even under intense workloads.

As for the GTX 1650, Nvidia has given this card a dual-fan design, which significantly reduces the risk of overheating while allowing for consistent airflow throughout the interior. The GTX 1650 also contains thermal pads, which offer an additional level of cooling. Despite the GTX 1650 clearly offering better, more advanced cooling features, one thing to keep in mind is that both cards will likely make noise while in operation, especially when managing heavier workloads, as neither contains a vapor chamber.


GTX1650

Ray Tracing

If you're a longstanding fan of Nvidia, then we're sure you'll already be aware that the company officially made ray tracing commercially available with the introduction of its GeForce 20 series RTX cards. That said, Nvidia did roll out a downloadable driver update back in 2019 that allowed ray tracing to be enabled on older-generation cards. The only downside? Both cards lack the RT and Tensor cores that make ray tracing practical, so even though you can technically enable it, frame rates will be very low and overall performance will take a big hit.

VRAM

Next up, we're going to take a look at VRAM, the card's video memory. Generally speaking, video memory is what helps a card render complex graphics smoothly and without lagging, so the more a card has, the better. Both the GeForce GTX 1650 and the GeForce GTX 1060 feature GDDR5 memory, which is widely considered to be one of the better types of graphics memory; it was purpose-built to improve bandwidth and allow for a much wider overall bus. In the GeForce GTX 1060, the GDDR5 memory runs at an effective 8 Gbps and is paired with a base clock of 1506MHz that can boost up to 1709MHz. As for the GeForce GTX 1650, its memory also runs at 8 Gbps, and it features a slightly lower base clock of 1485MHz that can boost up to 1665MHz.
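Since both cards pair the same 8 Gbps GDDR5 with different memory bus widths, the practical difference shows up in peak bandwidth. A quick sketch of that arithmetic follows; the 192-bit (GTX 1060 6GB) and 128-bit (GTX 1650) bus widths are Nvidia's published figures, brought in here only for illustration rather than taken from the article.

```python
# Rough memory-bandwidth estimate from effective data rate and bus width.
# 8 Gbps comes from the article; the 192-bit and 128-bit bus widths are
# Nvidia's published figures, used here only for illustration.

def memory_bandwidth_gbs(effective_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s: rate per pin * pins / 8 bits per byte."""
    return effective_rate_gbps * bus_width_bits / 8

for name, bus_bits in [("GTX 1060 6GB", 192), ("GTX 1650", 128)]:
    print(f"{name}: {memory_bandwidth_gbs(8, bus_bits):.0f} GB/s")
# GTX 1060 6GB: 192 GB/s
# GTX 1650: 128 GB/s
```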





Overclocking

Not many people know exactly what overclocking is, but most have probably heard the term used before. Here is the chance to learn what it is and decide whether or not it is something you should try on your computer.



What is Overclocking?

To put it in its simplest terms, overclocking is taking a computer component, such as a processor, and running it at a specification higher than rated by the manufacturer. In other words, you can run your computer harder and faster than it was designed to run if you overclock it. Companies such as Intel and AMD rate every part they produce for specific speeds. They test the capabilities of each one and certify it for that given speed. The companies underrate most parts to allow for increased reliability. Overclocking a part takes advantage of its remaining potential.


Why Overclock?

The primary benefit of overclocking is additional computer performance without the increased cost. Most individuals who overclock their system either want to try to produce the fastest desktop system possible or want to extend their computer's power on a limited budget. In some cases, users can boost their system performance by 25 percent or more. For example, a person may buy something like an AMD 2500+ and, through careful overclocking, end up with a processor with processing power equivalent to an AMD 3000+, but at a significantly reduced cost.

There are drawbacks to overclocking a computer system. The biggest is that you are voiding any warranty provided by the manufacturer, because the part is no longer running within its rated specification. Pushing overclocked components to their limits also tends to result in a reduced functional lifespan or, if done improperly, even catastrophic damage. For these reasons, overclocking guides on the internet carry a disclaimer warning readers of these facts before walking through the steps.


Bus Speeds and Multipliers

All CPU speeds are based on two distinct factors: bus speed and multiplier. The bus speed is the core clock rate at which the processor communicates with components such as the memory and the chipset. It is commonly given in MHz, referring to the number of cycles per second at which it runs. The problem is that the term "bus" is used for different aspects of the computer and will likely be lower than the user expects. For example, an AMD XP 3200+ processor uses 400 MHz DDR memory, but the processor actually runs a 200 MHz frontside bus that is clock-doubled to work with the 400 MHz DDR memory. Similarly, a Pentium 4 C processor has an 800 MHz frontside bus, but it's actually a quad-pumped 200 MHz bus. The multiplier is the number of processing cycles a CPU runs in a single clock cycle of the bus. So, a Pentium 4 2.4 GHz "B" processor is based on the following:

133 MHz x 18 (multiplier) = 2394 MHz, or roughly 2.4 GHz

When overclocking a processor, these are the two factors that influence performance. Increasing the bus speed has the greatest impact, as it raises the memory speed (if the memory runs synchronously) as well as the processor speed. The multiplier has a lower impact than the bus speed, but it can be more difficult to adjust. Because unscrupulous dealers were overclocking lower-rated processors and selling them as higher-priced models, manufacturers started to implement hardware locks to make overclocking more difficult. The most common method is clock locking: the manufacturers modify traces on the chips so they run only at a specific multiplier. A user can defeat this protection by modifying the processor, but it is much more difficult.
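That arithmetic is simple enough to sketch. The snippet below reproduces the Pentium 4 "B" example above and then shows a purely hypothetical bus overclock with a locked multiplier; the 150 MHz target is made up for illustration.

```python
def cpu_clock_mhz(bus_mhz: float, multiplier: int) -> float:
    """Effective CPU clock is simply bus speed times multiplier."""
    return bus_mhz * multiplier

# Stock Pentium 4 2.4 GHz "B" from the article: 133 MHz bus, 18x multiplier.
print(cpu_clock_mhz(133, 18))   # 2394.0 MHz, i.e. ~2.4 GHz

# Hypothetical bus overclock with a locked multiplier: raising the bus to a
# made-up 150 MHz target yields 150 * 18 = 2700 MHz (~2.7 GHz), and any
# memory or chipset clocks derived from the bus speed rise by the same ratio.
print(cpu_clock_mhz(150, 18))   # 2700.0 MHz
```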



Dealing With Heat

The biggest obstacle to overclocking the computer system is overheating. Today's high-speed computer systems already produce a large amount of heat. Overclocking a computer system compounds these problems. As a result, anyone planning to overclock their computer system should understand the requirements for high-performance cooling solutions.


The most common form of cooling a computer system is through standard air cooling: CPU heatsinks and fans, heat spreaders on memory, fans on video cards, and case fans. Proper airflow and suitable conducting metals are vital to the performance of air cooling. Large copper heatsinks tend to perform better, and extra case fans to pull air into the system also help to improve cooling.

Beyond air cooling, there is liquid cooling and phase-change cooling. These systems are far more complex and expensive than standard PC cooling solutions, but they offer higher heat-dissipation performance and generally lower noise. Well-built systems can allow the overclocker to push the performance of their hardware to its limits, but the cost can end up exceeding the cost of the processor itself. The other drawback is that running liquid through the system risks electrical shorts that can damage or destroy the equipment.


Managing the Voltage

Every computer part has a specific voltage for its operation. During the overclocking process, the electrical signal can degrade as it traverses the circuitry. If the degradation is severe enough, it can cause the system to become unstable. When overclocking the bus or multiplier speeds, the signals are more likely to pick up interference. To combat this, you can increase the voltage supplied to the CPU core, memory, or AGP bus. There are limits to how much additional voltage a user can apply to the processor. Apply too much and you can destroy the circuits. Typically this is not a problem, because most motherboards restrict the settings. The more common issue is overheating: the more voltage you supply, the higher the thermal output of the processor.
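A useful rule of thumb for why the heat climbs: dynamic power in CMOS logic scales roughly with capacitance times voltage squared times frequency, so even a modest voltage bump compounds with the higher clock. The sketch below applies that relationship with hypothetical voltage and frequency values, not figures from the article.

```python
# Rule-of-thumb dynamic power scaling for CMOS logic: P is roughly
# proportional to capacitance * voltage^2 * frequency. The baseline and the
# overclocked values below are made-up numbers, purely for illustration.

def relative_power(v_new, f_new_ghz, v_old=1.50, f_old_ghz=2.4):
    """Ratio of new dynamic power to old, assuming capacitance is unchanged."""
    return (v_new / v_old) ** 2 * (f_new_ghz / f_old_ghz)

# Hypothetical example: raising the core from 2.4 GHz at 1.50 V to 2.7 GHz at 1.60 V.
print(f"~{relative_power(1.60, 2.7):.2f}x the heat to dissipate")  # ~1.28x
```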

The Components

There are a lot of factors that affect whether you can overclock a computer system. First and foremost is a motherboard and chipset with a BIOS that allows the user to modify the settings. Without this capability, it's not possible to alter the bus speeds or multipliers to push the performance. Most commercially available computer systems from the major manufacturers do not have this capability, which is why those interested in overclocking tend to buy parts and build their own computers. Beyond the motherboard's ability to adjust CPU settings, the other components must also be able to handle the increased speeds. Buy memory that is rated or tested for higher speeds to preserve the best memory performance. For example, overclocking an Athlon XP 2500+ frontside bus from 166 MHz to 200 MHz requires that the system have PC3200- or DDR400-rated memory.

The frontside bus speed also regulates the other interfaces in the computer system. The chipset uses a ratio to step the frontside bus speed down to match those interfaces. The three primary desktop interfaces are AGP (66 MHz), PCI (33 MHz), and ISA (16 MHz). When the frontside bus is adjusted, these buses will also run out of specification unless the BIOS allows the ratio to be adjusted down, so changing the bus speed can impact stability through the other components as well. Of course, increasing these bus speeds can also improve their performance, but only if the parts can handle it; most expansion cards have very limited tolerances. If you're new to overclocking, don't push things too far right away. Overclocking is a tricky process involving a lot of trial and error. It is best to thoroughly test the system in a taxing application for an extended period to ensure it is stable at that speed, then step things back a bit to give some headroom, leaving a stable system with less chance of damage to the components.
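The knock-on effect on the derived buses described above is easy to see numerically. The sketch below assumes a chipset whose AGP and PCI dividers were fixed for a 166 MHz frontside bus; the divider values are typical examples chosen for illustration, not figures from the article.

```python
# Illustration of how derived buses drift out of spec when the frontside bus
# is raised with fixed dividers. Divider values are typical examples only.

FSB_SPEC = {"AGP": 66.0, "PCI": 33.0}           # nominal interface clocks (MHz)
DIVIDERS = {"AGP": 166 / 66, "PCI": 166 / 33}   # ratios fixed for a 166 MHz FSB

def derived_clocks(fsb_mhz: float) -> dict:
    """Interface clocks that result from dividing the frontside bus."""
    return {bus: fsb_mhz / div for bus, div in DIVIDERS.items()}

for fsb in (166, 200):
    report = {bus: f"{mhz:.1f} MHz" + (" (out of spec)" if mhz > FSB_SPEC[bus] + 1 else "")
              for bus, mhz in derived_clocks(fsb).items()}
    print(fsb, report)
# 166 {'AGP': '66.0 MHz', 'PCI': '33.0 MHz'}
# 200 {'AGP': '79.5 MHz (out of spec)', 'PCI': '39.8 MHz (out of spec)'}
```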



Looking Forward with Jensen Huang by Katyanna Quach

“Our pioneering work in accelerated computing has led to gaming becoming the world’s most popular entertainment, to supercomputing being democratized for all researchers, and to AI emerging as the most important force in technology” - Jensen Huang, Nvidia CEO


image source Nvidia


image source Nvidia

Going Beyond

Nvidia continues to grow and beat Wall Street's expectations amid a global chip shortage. On Wednesday, it revealed bumper figures for the fourth quarter of its fiscal 2021, the three months to January 31, and full-year results. CEO Jensen Huang boasted "Q4 was another record quarter, capping a breakout year" ahead of a conference call with analysts to discuss the numbers. He singled out gaming and the data center as his Silicon Valley corporation's two biggest areas, both driving up profits. In particular, we're told, GPU sales to verticals lead Nvidia's data center revenue growth, as opposed to sales to hyperscale cloud giants. "Vertical industries were well over 50 per cent of data center revenue across compute and networking, with particular strength in supercomputing, financial services, higher education and consumer internet verticals," CFO Colette Kress revealed to analysts on the call.

Last year, the GPU giant launched a slew of processors based on its Ampere architecture. That includes the high-end A100, which is designed for AI supercomputers, servers, and chunky workstations, where the majority of compute power goes into training, developing, and running machine-learning algorithms. It also refreshed its GeForce gaming line with five cards of varying power under the RTX 30 umbrella. Nearly all of them, including the 3090, 3080, 3070, 3060 Ti, were quickly snapped up after they launched. Hardware stocks are low, and aren’t expected to recover any time soon, due to the ongoing drought of semiconductors.

The RTX 3060 card is due to go on sale on Thursday. That's the one Nvidia will throttle at the driver level if it detects it is being used to mine Ethereum-like coins, a move that may deter some miners from grabbing the affordable cards at launch, and thus perhaps leave a few more for gamers to fight over. From next month, Nvidia will tout GPUs specifically for mining. It's been suggested that this approach, while seemingly sensible on paper, won't actually achieve a lot if miners defeat the driver-enforced protections, and that the crypto-mining cards divert silicon away from gamers. "The entire RTX 30 Series lineup has been hard to keep in stock, and we exited Q4 with channel inventories even lower than when we started," Kress said on the conference call. "Although we are increasing supply, channel inventories will likely remain low throughout Q1." Huang added: "At the company level, we're constrained. Demand is greater than supply ... We just have to do a really good job planning." Then, to assure investors, he continued: "We have enough supply to grow each quarter throughout the year."



Despite all this, Nvidia appears to be in good and healthy spirits. Here's a quick rundown of the numbers: revenues were $5bn for the fourth fiscal quarter, a jump of 61 per cent compared to the $3.1bn from a year ago. That also marks the end of its fiscal year 2021, bringing its total revenues to $16.7bn, up 53 per cent from the $10.9bn recorded the previous year. Gaming in Q4 pulled in $2.5bn, a jump of 67 per cent from a year ago. Over 70 new models of laptops launched containing a GeForce RTX 30 graphics chipset.

Data Center was the second biggest segment, making up $1.9bn in Q4 revenues. That's a hike of 97 per cent from the previous year. That does not include Mellanox's numbers: the acquired networking biz "had a sequential decline impacted by non-recurring sales to a China OEM in Q3," the CFO said. "Starting next quarter ... we will no longer break out Mellanox revenue separately." Automotive, Professional Visualisation, and OEM are much smaller by comparison, at $145m, $307m, and $153m, respectively. Revenues shrank by 11 per cent and 7 per cent for self-driving cars and for graphics rendering in games and animation, respectively; OEM, however, was up a smidgen, 1 per cent from the year before. Net income was $1.46bn for the latest quarter, an increase of 53 per cent from the $950m a year earlier. For the full year, it was up 55 per cent to $4.3bn. Gross margin percentage was 63.1 per cent, down 180 basis points (BPS) from a year ago.


image source The Verge

GAAP earnings per share were $2.31, an increase of 51 per cent year-over-year. Nvidia's proposed acquisition of British chip designer Arm is under review by regulators in the US and UK. Not everyone is in favor of it, unsurprisingly; Google, Microsoft, and Qualcomm have voiced their concerns in private. Huang, however, remained confident that the deal would go through.

“At the time, we predicted it’d take 18 months for [the acquisition to process in] the US, UK, EU, China and other jurisdictions,” he said. “This process is moving forward as expected, we are confident regulators will see the benefit. Together Arm and Nvidia will provide greater choice to the ecosystem and we intend to continue Arm’s open licensing model.

“The pandemic will pass but the world has been changed forever, we see technology being accelerated across every industry. The urgency to digitise, automate, and accelerate has never been greater.”



FAST. INTEGRATED. THE FUTURE. NVMe.

By NetApp



What is NVMe?

NVMe (nonvolatile memory express) is a new storage access and transport protocol for flash and next-generation solid-state drives (SSDs) that delivers the highest throughput and fastest response times yet for all types of enterprise workloads. Today, in both consumer apps and business, users expect ever-faster response times, even as the applications themselves become vastly more complex and resource dependent.

To help deliver a high-bandwidth, low-latency user experience, the NVMe protocol accesses flash storage via a PCI Express (PCIe) bus, which supports tens of thousands of parallel command queues and thus is much faster than hard disks and traditional all-flash architectures, which are limited to a single command queue. The NVMe specification takes advantage of nonvolatile memory in all kinds of computing environments. And it’s future-proof, extendable to work with not-yet-invented persistent memory technologies.
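The single-command-queue comparison is easier to appreciate with the raw numbers. A minimal sketch, using the commonly cited spec limits (AHCI/SATA: one queue of 32 commands; NVMe: up to 65,535 I/O queues of up to 65,536 commands each):

```python
# Back-of-the-envelope comparison of outstanding-command capacity, using the
# commonly cited spec limits: AHCI/SATA exposes one queue of 32 commands,
# while NVMe allows up to 65,535 I/O queues of up to 65,536 commands each.
ahci_outstanding = 1 * 32
nvme_outstanding = 65_535 * 65_536

print(f"AHCI/SATA: {ahci_outstanding} commands in flight")
print(f"NVMe (spec maximum): {nvme_outstanding:,} commands in flight")
# AHCI/SATA: 32 commands in flight
# NVMe (spec maximum): 4,294,901,760 commands in flight
```

In practice drives and drivers expose far fewer queues than the spec maximum, but the gap in available parallelism is still orders of magnitude.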



Benefits of NVMe for data storage

It's about time – NVMe storage is big news in the enterprise data center because it saves time. Unlike protocols designed in the days of mechanical hard disk drives, NVMe leverages not just solid-state storage, but also today's multicore CPUs and gigabytes of memory. NVMe storage also takes advantage of streamlined command sets to efficiently parse and manipulate data.

NVMe use cases

NVMe storage is already being used in business scenarios where every microsecond counts: real-time customer interactions in finance, e-commerce, and software sales; artificial intelligence (AI), machine learning (ML), big data, and advanced analytics apps; and DevOps, enabling you to run more iterations in less time.



image source Groovypost

NVMe over Fabrics (NVMe-oF)

NVMe is more than faster flash storage – it's also an end-to-end standard that enables vastly more efficient transport of data between storage systems and servers. NVMe over Fabrics extends NVMe's performance and latency benefits across network fabrics such as Ethernet, Fibre Channel, and InfiniBand, providing higher IOPS and reduced latency from the host software stack all the way through the Data Fabric to the storage array.

NVMe over Fibre Channel (NVMe/FC)

With the recent release of NetApp® ONTAP®, NetApp's AI data management platform provides NVMe over Fibre Channel support today. Many enterprises have built their entire infrastructure around Fibre Channel because of its performance and reliability, plus its support for fabric-based zoning and name services. Applications such as databases run much faster when using the NVMe/FC protocol compared to FCP (the SCSI protocol with an underlying Fibre Channel connection). ONTAP NVMe/FC traffic can co-reside with FCP traffic on the same Fibre Channel fabric, so it's easy to get started with NVMe/FC. For many customers with ONTAP AFF systems, this is simply a nondisruptive software upgrade.



CREDITS

Typefaces

Roboto
Regular, Italic, Thin, Medium, Medium Italic, Bold, Bold Italic, Black, Black Italic

Eras ITC
Bold

Octarine
Light Italic

Microsoft Sans Serif
Regular

Images

CPU Cooling
Jonathan Hu, Bill Salnik, Stephen Anders
Oleg Magni, Eric Sanman

Interview with Jensen Huang
Nvidia, The Verge

Overclocking

NVMe
Groovypost


Articles

GTX1050 Revival
Chacos, Brad. "Confirmed: Nvidia Taps the GTX 1050 Ti to Battle Graphics Card Shortages." PCWorld, 11 Feb. 2021, www.pcworld.com/article/3607190/nvidia-rtx-30-graphics-card-shortages-gaming-gpu-gtx-1050-ti-geforce-rtx-2060.html.

CPU Cooling
"CPU Cooler: Liquid Cooling Vs Air Cooling." Intel, www.intel.com/content/www/us/en/gaming/resources/cpu-cooler-liquid-cooling-vs-air-cooling.html.

Graphics Battle
Barclay, Steve. "GTX 1650 vs 1060 - WePC: Let's Build Your Dream Gaming PC." WePC, 28 Jan. 2021, www.wepc.com/compare/1650-vs-1060/.

Overclocking
Kyrnin, Mark. "Get the Most Out of Your Computer by Overclocking It." Lifewire, 8 Apr. 2020, www.lifewire.com/what-is-overclocking-a-computer-4092341.

Interview with Jensen Huang
Quach, Katyanna. "GPUs for Gaming, Data-Center Servers Continue to Drive up Nvidia's Revenues despite Chip Shortages Everywhere." The Register, 25 Feb. 2021, www.theregister.com/2021/02/25/gpus_for_gaming_and_aipowered/.

NVMe
NetApp. "What Is NVMe? - Benefits & Use Cases." NetApp, www.netapp.com/data-storage/nvme/what-is-nvme/.



A Note on Roboto

This bi-annual magazine uses the Roboto typeface family to suit its identity as a technology-focused publication. Roboto was created in 2011 by in-house Google designer Christian Robertson for Android and has since been the default typeface of Android devices along with many Google services. It was made with the intent of being well suited to on-screen documentation and of balancing content density with legibility.

Art Direction and Design

David Pham






