This week we talk about AMD, graphics processing units, and AI.
We also discuss crypto mining, video games, and parallel processing.
Recommended Book: The Story of Art Without Men by Katy Hessel
Transcript
Founded in 1993 by an engineer who previously designed microprocessors for semiconductor company AMD, an engineer from Sun Microsystems, and a graphics chip designer and senior engineer from Sun and IBM, NVIDIA was focused on producing graphics-optimized hardware because of a theory held by those founders that this sort of engineering would allow computers to tackle new sorts of problems that conventional computing architecture wasn't very good at.
They also suspected that the video game industry, which was still pretty nascent, but rapidly growing, this being the early 90s, would become a big deal, and the industry was already running up against hardware problems, computing-wise, both in terms of development, and in terms of allowing users to play games that were graphically complex and immersive.
So they scrounged about $40k between them, started the company, and then fairly quickly were able to attract serious funding from Silicon Valley VCs, initially to the tune of $20 million.
It took them a little while, about half a decade, to get their first real-deal product out the door, but a graphics accelerator chip they released in 1998 did pretty well, and their subsequent product, the GeForce 256, which empowered consumer-grade hardware to do impressive new things, graphically, made their company, and their GeForce line of graphics cards, an industry standard for gaming hardware.
Graphics cards, those of the dedicated or discrete variety, which basically means the card is a separate piece of hardware from the motherboard, the main computer hardware, give a computer or other device enhanced graphics powers, lending it the ability to process graphical stuff separately, with tech optimized for that purpose, which in turn means you can play games or videos or whatnot that would otherwise be sluggish or low-quality, or in some cases, it allows you to play games and videos that your core system simply wouldn't be capable of handling.
These cards are circuit boards that are installed into a computer's expansion slot, or in some cases attached using a high-speed connection cable.
Many modern video games require dedicated graphics processors of this kind in order to function, or in order to function at a playable speed and resolution; lower-key, simpler games work decently well with the graphics capabilities included in the core hardware, but the AAA-grade, high-end, visually realistic stuff almost always needs this kind of add-on to work, or to work as intended.
And these sorts of add-ons have been around since personal computers have been around, but they really took off on the consumer market in the 1980s, as PCs started to become more visual—the advent of Windows and the Mac made what was previously a green-screen, number and character-heavy interface a lot more colorful and interactive and intuitive for non-programmer users, and as those visual experiences became more complex, the hardware architecture had to evolve to account for that, and often this meant including graphics cards alongside the more standard components.
A huge variety of companies make these sorts of cards, these days, but the majority of modern graphics cards are designed by one of two companies: AMD or Nvidia.
What I'd like to talk about today is the latter, Nvidia, a company that seems to have found itself in the right place at the right time, with the right investments and infrastructure, to take advantage of a new wave of companies and applications that desperately need what it has to offer.
—
Like most tech companies, Nvidia has been slowly but surely expanding its capabilities and competing with other entities in this space by snapping up other businesses that do things it would like to be able to do.
It bought out the intellectual assets of 3dfx, a fellow graphics card-maker, in late 2000, grabbed several hardware designers in the early 2000s, and then it went about scooping up a slew of graphics-related software-makers, to the point where the US Justice Department started to get anxious that Nvidia and its main rival, AMD, might be building monopolies for themselves in this still-burgeoning space, which was becoming increasingly important to the computing and gaming industries.
Nvidia was hit hard by lawsuits related to defects in its products in the late 20-aughts, and it invested heavily in producing mobile-focused systems on a chip—holistic, small form-factor microchips that ostensibly include everything device-makers might need to build smartphones or gaming hardware—and even released its own gaming pseudo-console, the Nvidia Shield, in the early 20-teens.
The company continued to expand its reach in the gaming space in the mid-to-late-20-teens, while also expanding into the automobile media center industry—a segment of the auto-industry that was becoming increasingly digitized and connected, removing buttons and switches and opting for touchscreen interfaces—and it also expanded into the broader mobile device market, allowing it to build chips for smartphones and tablets.
What they were starting to realize during this period, though—and this is something they began looking into and investing in, in earnest, back in 2007 or so, through the early 20-teens—is that the same approach they used to build graphics cards, basically lashing a bunch of smaller chip cores together so they all worked in parallel, allowing them to do a bunch of different stuff simultaneously, also allowed them to do other things that require a whole lot of parallel functionality—and that's in contrast to building chips with brute strength, chips which aren't necessarily capable of doing a bunch of smaller tasks in parallel to each other.
So in addition to being able to show a bunch of complex, resource-intensive graphics on screen, these parallel-processing chip setups could also allow them to, for instance, do complex math, as is required for physics simulations and heavy-duty engineering projects; they could simulate chemical interactions, like pharmaceutical companies need to do; or—and this turned out to be a big, important use-case—they could run the sorts of massive data centers tech giants like Google and Apple and Microsoft were beginning to build all around the world, to crunch all the data being produced and shuffled here and there for their cloud storage and cloud computing architectures.
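To make that a bit more concrete, here's a rough, hypothetical sketch, in Python with the PyTorch library, of the sort of parallel math we're talking about; it assumes PyTorch is installed and, for the GPU path, that an Nvidia card is present, and it's purely an illustration, not code from any of the companies mentioned.

```python
# A rough, hypothetical sketch of the parallel math GPUs are built for: one big
# matrix multiplication is really billions of independent multiply-adds, which a
# GPU's thousands of small cores can chew through simultaneously.
# Assumes PyTorch is installed; uses an Nvidia GPU if available, otherwise the CPU.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.rand(4096, 4096, device=device)
b = torch.rand(4096, 4096, device=device)

start = time.time()
c = a @ b                        # roughly 137 billion floating-point operations
if device == "cuda":
    torch.cuda.synchronize()     # wait for the GPU to actually finish its work
print(f"4096x4096 matmul on {device}: {time.time() - start:.4f} seconds")
```

On a data-center-grade GPU, an operation like this typically finishes many times faster than it would on a general-purpose CPU, and that's the whole pitch: lots of small, identical operations, all happening at once.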
In the years since, the revenue from that latter, data-center use-case has far surpassed what Nvidia pulls in from its video game-optimized graphics processing units.
And another use-case for these types of chip architectures, that of running AI systems, looks primed to take the revenue crown from even those cloud computing setups.
Nvidia's most recent quarterly report showed that its revenue tied to its data-center offerings more than doubled over the course of just three months, and it's generally expected that this revenue will more than quadruple, year-over-year, and all of this despite a hardware crunch caused by a run on its highest-end products by tech companies wanting to flesh out their AI-related, number-crunching setups; it hasn't been able to meet the huge surge in demand that has arisen over the past few years, but it's still making major bank.
Part of why Nvidia's hardware is so in demand for these use-cases is that, back in 2006, it released the Compute Unified Device Architecture, or CUDA, a parallel computing platform and programming model that allows developers to write applications that run on GPUs, graphics processing units, rather than on conventional, CPU-centric computing setups.
This is what allows folks to treat these gobs of parallel-linked graphics processing units like highly capable computers, and it's what allows them to use gaming-optimized hardware for simulating atoms or managing cloud storage systems or mining Bitcoin.
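To give a sense of what that programming model looks like in practice, here's a minimal, hypothetical sketch of a GPU kernel written from Python via the third-party Numba library's CUDA support (kernels are more traditionally written in CUDA C or C++); it assumes an Nvidia GPU, the CUDA toolkit, and the numba and numpy packages are installed.

```python
# A minimal, hypothetical sketch of the CUDA programming model, written from
# Python using Numba's CUDA support: each GPU thread handles one element of the
# arrays, so a million elements get processed by a million lightweight threads
# running in parallel. Assumes an Nvidia GPU plus the numba and numpy packages.
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_add(x, y, out):
    i = cuda.grid(1)          # this thread's global index across the whole launch
    if i < out.size:          # guard: the last block may have a few spare threads
        out[i] = 2.0 * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

d_x = cuda.to_device(x)                  # copy inputs into GPU memory
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(d_x)      # allocate the output on the GPU

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
scale_and_add[blocks, threads_per_block](d_x, d_y, d_out)   # launch the kernel

print(d_out.copy_to_host()[:5])          # copy the result back and peek at it
```

That launch configuration, the [blocks, threads_per_block] bit, is where the parallelism gets spelled out explicitly, and it's exactly the kind of low-level detail that Nvidia's software libraries wrap up for common workloads.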
CUDA now has 250 software libraries, which is huge compared to its competitors, and that allows AI developers—a category of people who are enjoying the majority of major tech investment resources at the moment—to perch their software on hardware that can handle the huge processing overhead necessary for these applications to function.
Other companies in this space are making investments in their own software offerings, and the aforementioned AMD, which is launching AI-focused hardware as well, uses open-source software for its tech, which has some benefits over Nvidia's largely proprietary libraries.
Individual companies, too, including Amazon, Microsoft, and Google, are all investing in their own, homegrown, alternative hardware and software, in part so they can be less dependent on companies like Nvidia, which has been charging them an arm and a leg for its high-end products, and which, again, has been suffering from supply shortages because of all this new demand.
So these big tech companies don't want to be reliant on Nvidia for their well-being in this space, but they also want to optimize their chips for their individual use-cases; they're throwing tons of money at this problem, hoping to liberate themselves from future shortages and dependency issues, and to maybe even build themselves a moat in the AI space in the future, if they can develop hardware and software for their own use that their competition won't be able to match.
And for context, a single system with eight of Nvidia's newest, high-end GPUs for cloud data center purposes can cost upward of $200,000, which is about 40 times the cost of buying a generic server optimized for the same purposes; so this is not a small amount of money, considering how many of those systems these companies require just to function at a base level, but these companies are still willing to pay those prices, and are in fact scrambling to do so, hoping to get their hands on more of these scarce resources, which further underlines why they're hoping to make their own, viable alternatives to these Nvidia offerings, sooner rather than later.
Despite those pressures to move to other options, though, Nvidia enjoys a substantial advantage in this market, right now, because of the combination of its powerful hardware and its CUDA software ecosystem.
That's allowed it to rapidly climb the ranks of the highest-value global tech companies, recently becoming the first semiconductor company to hit the $1 trillion valuation mark, bypassing Tesla and Meta and Berkshire Hathaway, among many other companies, along the way, and something like 92% of AI models are currently written in PyTorch—a machine learning framework that uses the Torch library, and which is currently optimized for use on Nvidia chips because of its cross-compatibility with CUDA; so this advantage is baked into the industry for the time being.
That may change at some point, as the folks behind PyTorch are in the process of evolving it to support other GPU platforms, like those run by AMD and Apple.
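In practice, that baked-in default shows up as a few lines of device-selection boilerplate in a lot of PyTorch code; what follows is a hypothetical, simplified sketch of that pattern, assuming a reasonably recent PyTorch install, with Nvidia's CUDA tried first and Apple's Metal backend (MPS) and the plain CPU as fallbacks.

```python
# A hypothetical, simplified example of the device-selection pattern common in
# PyTorch code today: Nvidia's CUDA backend is the first choice when present,
# with Apple's MPS backend and the plain CPU as fallbacks.
import torch

if torch.cuda.is_available():               # Nvidia GPU via CUDA
    device = torch.device("cuda")
elif torch.backends.mps.is_available():     # Apple Silicon via Metal (MPS)
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# The same model code runs on whichever backend was picked above.
model = torch.nn.Linear(128, 10).to(device)
batch = torch.rand(32, 128, device=device)
print(model(batch).shape, "computed on", device)
```

The "cuda" branch is the one most tutorials, libraries, and performance tuning assume today, and that assumption is a big part of Nvidia's current advantage.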
But at the moment, Nvidia's is the simplest default platform to work with for the majority of folks working in AI; so they have a bit of a head start, and that head start was in many ways enabled and funded by their success in the video game industry, and then by the few years during which they were heavily funded by the crypto-mining industry, all of which provided them the resources they needed to reinforce that moat and build out their hardware and software so they were able to become the obvious, default choice for AI purposes, as well.
So Nvidia is absolutely killing it right now, their stock having jumped from about $115 a share a year ago to around $460 a share, today, and they're queued up to continue selling out every product they make as fast as they can make them.
But we're entering a period, over the next year or two, during which that dominance will start to be challenged, with more AI code becoming transferable to software and hardware made by other companies, and more of their customers building their own alternatives; so a lot of what's fueling their current success may start to sputter if they aren't able to build some new competitive advantages in this space, sometime very soon, despite their impressive, high-flying, stock-surging, valuation-ballooning performance over these past few years.
Show Notes
* https://www.wsj.com/articles/SB10001424052702304019404577418243311260010
* https://www.wsj.com/articles/SB121358204084776309
* https://www.wsj.com/tech/ai/how-nvidia-got-hugeand-almost-invincible-da74cae1
* https://www.reuters.com/technology/chatgpt-owner-openai-is-exploring-making-its-own-ai-chips-sources-2023-10-06/
* https://www.theinformation.com/articles/microsoft-to-debut-ai-chip-next-month-that-could-cut-nvidia-gpu-costs
* https://en.wikipedia.org/wiki/PyTorch
* https://innovationorigins.com/en/amd-gears-up-to-challenge-nvidias-ai-supremacy/
* https://techcrunch.com/2023/10/07/how-nvidia-became-a-major-player-in-robotics/
* https://en.wikipedia.org/wiki/Graphics_card
* https://en.wikipedia.org/wiki/Nvidia