
Thursday, 28 September 2017

#Moore’s Law is dead, says #NVIDIA's CEO


One of the topics we’ve repeatedly returned to here at ExtremeTech is the state of Moore’s Law and its long-term future. Our conclusions have often been at odds with public statements by semiconductor designers and the foundries that build their hardware. Intel, for example, is still stressing the importance and validity of Moore’s Law. Nvidia’s CEO, Jen-Hsun Huang, no longer agrees.
According to Jen-Hsun, CPU scaling over the past few years has significantly increased transistor counts, but performance improvements have been few and far between. GPUs, in contrast, have gotten much faster over the same period. Jen-Hsun has occasionally referred to this as “Hyper Moore’s Law,” and he argues that because CPUs are less adept at parallelism, GPUs will eventually supplant them. DigiTimes reports Nvidia has also teamed up with Huawei, Inspur, and Lenovo to develop a new Tesla 100 HGX-1 accelerator specifically designed for AI applications.
There’s no denying CPU performance improvements have been slow these past six years. Intel has focused more on reducing power consumption and improving performance in low-power envelopes, and its advances in this area have been considerable; modern CPUs draw far less power than Sandy Bridge did. As for Jen-Hsun’s comments on Moore’s Law, the situation is more complicated than he makes it look. Computing isn’t divided strictly between CPUs and GPUs with nothing in the middle. Intel’s Knights Landing, for example, packs up to 72 cores, 288 threads, and 36MB of L2 cache. While Xeon Phi processors are technically based on an Atom core, Intel has substantially modified them to handle multiple threads and AVX-512 instructions.
Intel isn’t the only company working in this field. Multiple manufacturers are designing their own custom hardware for these workloads, including Fujitsu, Intel-owned Movidius, and Google. These processors aren’t conventional CPUs, but they aren’t GPUs, either. It’s entirely possible the AI and deep learning processors deployed in data centers will look nothing like those deployed at the edge, in smartphones or (unlikely, but technically possible) PCs.

Is Moore’s Law Dead?

Even the answer to this question is open to debate. Historically, people have treated Moore’s Law as a rule that CPU performance will double every 18-24 months, but that’s not what it says. Moore’s Law predicts transistor counts doubling, not raw performance. There was another rule that governed performance improvements: Dennard scaling. Dennard scaling stated that as transistors became smaller, they would use less power. This would reduce the heat generated by any given transistor and allow them to be packed closer together. Unfortunately, Dennard scaling broke down around 2005, which is why CPU clock speeds have barely budged since then.
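A simplified sketch of why that mattered (a textbook approximation, not something from Jen-Hsun or the original article): under the idealized constant-field scaling Dennard described, shrinking a transistor’s linear dimensions by a factor k lets capacitance and supply voltage fall by roughly the same factor while switching frequency rises by k, so dynamic power per transistor drops:

P ≈ C · V² · f  →  (C/k) · (V/k)² · (k·f) = C · V² · f / k²

Because the same die area now holds about k² times as many transistors, power density stays roughly constant, which is what let clock speeds climb for decades. Once supply voltage could no longer keep scaling down (leakage became the dominant problem in the mid-2000s), that relationship fell apart, and clocks stalled even as transistor counts kept doubling.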

I’ve argued in the past that Moore’s Law isn’t dead so much as it’s transformed. Rather than focusing strictly on increasing transistor counts and clock speeds, companies now focus on power efficiency and component integration. The explosion of specialized processors for handling AI and deep learning workloads is partly a reaction to the fact that CPUs don’t scale the way they used to.
It’s important to keep in mind that the deep learning and AI markets are in their infancy. Companies have floated a huge number of ideas about what AI and deep learning could do, but actually deploying these technologies in the field has proven more challenging. If the market takes off, however, you’ll eventually see these capabilities being built into CPUs. Once upon a time (aka the mid-1990s), features like graphics and L2 cache resided on the motherboard, not the CPU. Over time, CPUs have integrated L2 cache, L3 cache, memory controllers, integrated graphics, and the southbridges that used to handle storage and I/O control.
Jen-Hsun is absolutely right that adding transistors has done little for CPU performance, and in that sense, Moore’s Law is dead. If you consider the question in terms of what features and capabilities CPUs have integrated, however, Moore’s Law is very much alive. Nvidia has done a great deal of work in AI and machine learning, but the situation is more complicated than Jen-Hsun implies, and we don’t yet know whose cores and designs are going to win out over the others. We’re still in the “throw mud at the wall and see what sticks” phase. It’s entirely possible the best processor design for handling these workloads hasn’t even been invented yet.
Source: ExtremeTech
