Lion of the Blogosphere

Moore’s Law has ended, but no one noticed

I noticed that the 2014 MacBook Air is barely any faster, according to benchmarks, than the 2012 MacBook Air. What happened? Isn’t Moore’s Law supposed to make computers twice as fast every two years?

Well, it turns out that Moore’s Law actually came to an end in 2012. This does not mean that there has been absolutely no progress in microprocessors, but progress has slowed down significantly. There have been bigger improvements in battery life and LCD quality, and solid-state storage has become less expensive, but raw computing power is not increasing according to Moore’s Law!

The problem is already known in the industry. In the past, improvements in computing power have come from shrinking the transistor size. But now that transistors have shrunk down to 28nm, it’s proving difficult to make chips with smaller transistors that are as inexpensive per transistor as chips with 28nm transistors. (This has been edited to remove inaccuracies in the original post.)

Once again, this doesn’t mean that microprocessors will remain frozen in time, but it does mean that they are progressing a lot slower than they did in the past.

This has significant implications for the coming age of robots. Futurists imagined that the robots of the future would be powered by exponentially more powerful computer chips, but if the chips of ten years from now are only twice as powerful as current models, that’s very different from their being 32 times as powerful (which is what would happen if computing power doubled every two years).
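
To put a number on that gap, here’s the back-of-the-envelope arithmetic (a rough sketch, nothing more):

```python
# Back-of-the-envelope: cumulative speedup if chips doubled in power every
# two years, versus only doubling once over the same ten-year span.
years = 10
doubling_period = 2  # the classic Moore's Law assumption

moores_law_factor = 2 ** (years / doubling_period)  # 2^5 = 32x
slowed_factor = 2                                   # one doubling in ten years

print(f"Doubling every {doubling_period} years: {moores_law_factor:.0f}x after {years} years")
print(f"Doubling once in {years} years: {slowed_factor}x")
```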

Maybe we should be happy about the end of Moore’s Law, because this means that computers won’t become super-smart, develop evil self-awareness, and enslave humans. The downside is that this could delay the coming of the singularity (or an upside for people who fear that the singularity would mean the loss of our humanity).

* * *

Also note that clock speeds have been stuck at around 3 GHz for the last decade. Up until the early part of this century, increasing the clock speed was an easy way to boost performance. When clock speeds rose from about 3 MHz in the 1980s to 3 GHz in the early 2000s, that was an easy thousand-fold increase in performance!

Written by Lion of the Blogosphere

December 1, 2014 at 5:06 PM

Posted in Technology

66 Responses


  1. Could this just be a temporary hiccup in a long-term trend?

    JayMan

    December 1, 2014 at 5:21 PM

  2. From what I understand, a bigger problem is the end of Dennard scaling. This was a law that said that as transistors shrink, their power usage goes down proportionally.

    Now when they do manage to shrink the chips, the power usage isn’t going down.

    alex

    December 1, 2014 at 5:26 PM

    • It was not a law. Just like Moore’s Law was not a law. It was a bit of IR, a bit of PR, and a lot of wishful thinking.

      MyTwoCents

      December 2, 2014 at 3:39 AM

  3. Consumers are demanding cheap, and the same speed chips are much cheaper than 2 years ago.

    They also want lower power consumption for mobile, and that too has been delivered by the industry.

    In terms of clock speed, we’ve actually been stuck at around 3.5 GHz for high-end desktop processors for more like 5 years. Instead we get more cores.

    Lot

    December 1, 2014 at 5:36 PM

    • I didn’t say that progress has stopped. But if you look, you will see that cheap laptops of today do not have twice the computing power of cheap laptops of two years ago.

      And yes, we reached a frequency limit about 10 years ago. The clock speed of chips has not increased since then, and that was an easy way of making computers more powerful. My first PC had a clock speed of 8 MHz. When you increase the clock speed from 8 MHz to 4 GHz, you get a 500-fold increase in computing power without having to add any transistors.

      A quad-core CPU, on the other hand, is only sometimes four times as fast as a single-core CPU, and requires a more sophisticated operating system and compiler to fully take advantage of it.
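
Here’s a rough illustration of why more cores don’t automatically mean proportionally more speed, using Amdahl’s law with some assumed parallel fractions (a sketch, not a benchmark):

```python
# Amdahl's law: the speedup from N cores when only a fraction p of a program
# can actually run in parallel. The values of p are assumptions for illustration.
def amdahl_speedup(p, n_cores):
    return 1.0 / ((1.0 - p) + p / n_cores)

for p in (0.50, 0.90, 0.99):
    print(f"parallel fraction {p:.2f}: "
          f"4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"8 cores -> {amdahl_speedup(p, 8):.2f}x")
```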

      Lion of the Blogosphere

      December 1, 2014 at 5:46 PM

      • In one respect the “double the speed every 18-24 months” was only true for a fairly short time in the 80’s and 90’s.

        But so what? Are you doing anything that requires a faster computer? I have a high-end i7 from 2010 with a new flash drive, and my Windows boot time is ~15 seconds, Word and Excel open in less than 2 seconds. iTunes is the program that takes the longest at about 3 seconds.

        The only time I ever have to wait on my PC is doing OCR jobs in Acrobat.

        Peter has it right, cheap flash drives are the biggest improvement in computer speed in a decade from the perspective of an end user. I’m sort of shocked that flash drives don’t have a 95% home PC and 99% laptop market share at this point. Dell, HP, et al are disrespecting their customers by not pushing them hard.

        Lot

        December 1, 2014 at 9:39 PM

  4. Performance on home computers is most often bound by hard drive access speeds. Upgrading to flash disks will be a big performance gain. I assume most laptops in the future will have solid state disks by default if they don’t already. From a user perspective that means a “faster” computer even with the same CPU.

    peterike

    December 1, 2014 at 6:27 PM

    • You are not correct. Hard drive and/or SSD storage is not used much for the actual computing; it is mostly for data storage. Computers have CPU cache memory and RAM for computing.

      MyTwoCents

      December 2, 2014 at 3:35 AM

      • It’s my understanding that the experience of using a desktop OS like Windows 7 or Mac OS X is a lot faster with an SSD because a lot of the lagginess comes from reading data from the hard disk drive and copying it into the computer’s RAM.

        In other words, the perceived slowness of PCs is because of the bottleneck of the HDD. I’ve been meaning to buy an SSD for my 4-year-old desktop computer.
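
A crude way to see where the bottleneck is would be to time a big sequential read, something like this sketch (the file path is a placeholder, and the OS file cache can skew the numbers):

```python
# Rough sketch: time a sequential read of a large file to estimate drive
# throughput. PATH is a placeholder; point it at any big local file, and note
# that the OS file cache can inflate the result on a second run.
import time

PATH = "bigfile.bin"
CHUNK = 4 * 1024 * 1024  # read in 4 MB chunks

start = time.time()
total = 0
with open(PATH, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.time() - start
print(f"{total / 1e6:.0f} MB in {elapsed:.2f} s ({total / 1e6 / elapsed:.0f} MB/s)")
```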

        Lion of the Blogosphere

        December 2, 2014 at 9:19 AM

      • By all means get an SSD. Extraordinary speed gains await you.

        Chucks

        December 8, 2014 at 8:00 PM

      • Use an SSD for your OS drive and an enterprise-quality HDD for storage. I like the WD RE line. As SSDs drop in price, they might become feasible for storage, though they are affordable now depending on how much storage you need. I’ll never go back to an HDD for the OS drive.

        Tom

        December 11, 2014 at 10:35 PM

  5. Actually it’s Intel’s 14nm “Broadwell” chips that were delayed. The 22nm chips were released with “Ivy Bridge” in 2012, and the architectural improvement “Haswell” on the 22nm process was released in 2013. The meaning of the sizes has become less well defined: Intel uses 32/22/14nm, while the other foundries use something like 28/20/16nm. The cost per transistor of the 20/22nm node is a closely guarded secret; Intel claims that it is cheaper and has shipped millions of devices. A lot of the foundries have moved to a pay-per-wafer model rather than pay per chip, so the economic risk of poor yields (the fraction of working chips on a wafer) is in the hands of the customers. That’s going to be uneconomic.

    That said: exponential scaling always stops, and the end is nigh!

    TJ hooker

    December 1, 2014 at 6:34 PM

  6. more cpus/ problem solved

    grey enlightenment

    December 1, 2014 at 7:29 PM

  7. The market may be limited by technology but it’s driven by supply and demand. I don’t see much demand for faster processors. Do I need a faster processor? No. Why would I? In fact, I’m quite happy with my old machine and ticked off by Microsoft releasing newer software that obsoletes older versions. Besides, nearly all the growth for the last few years has been in smartphones. That’s where the research dollars are going.

    Enough about that. How are your energy stocks doing, Leon?

    destructure

    December 1, 2014 at 9:06 PM

    • Crappy.

      Lion of the Blogosphere

      December 2, 2014 at 12:00 AM

      • It is good time to buy more of them.

        MyTwoCents

        December 2, 2014 at 3:27 AM

      • “Crappy.”

        I considered buying after your mention. Brains didn’t keep me out. I was too busy for due diligence and stayed in index funds.

        “It is good time to buy more of them.”

        Maybe. Maybe not. I’ve made the mistake of trying to catch a falling knife.

        destructure

        December 2, 2014 at 5:59 PM

  8. If you go to Intel’s chip timeline, you see clock speeds mostly did not double every 1.5 to 2 years:

    http://www.intel.com/content/www/us/en/history/history-intel-chips-timeline-poster.html

    71 to 72 8x improvement
    72 to 74 2.5x
    74 to 78 2.5x (failed)
    78 to 82 1.2x (failed)
    82 to 85 2.67x (failed)
    85 to 89 1.6x (failed)
    89 to 93 3x (failed)
    93 to 95 3x
    95 to 97 1.5x (failed)
    97 to 99 3x
    99 to 00 2.5x

    Lot

    December 1, 2014 at 9:46 PM

    • Moore’s Law is defined according to the number of transistors in an integrated circuit, not clock speed. Looking at the number of transistors, Moore’s Law has held rather nicely, going from 2,300 transistors in 1971 to 1.4 billion transistors in 2012.

      In fact, computing power has actually doubled considerably faster than every two years, because power is a function of both the number of transistors and clock speed. In 1971, those 2,300 transistors were running at 108 KHz, while in 2012 those 1.4 billion transistors were running at 2.9 GHz.

      This is a huge amount of time to be riding such a steep exponential curve.
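
A quick check of the doubling periods implied by those figures (treating raw computing power as transistors times clock speed, which is only a crude proxy):

```python
# Check the doubling periods implied by the 1971 vs. 2012 figures above.
from math import log2

years = 2012 - 1971                  # 41 years
transistor_growth = 1.4e9 / 2300     # ~609,000x more transistors
clock_growth = 2.9e9 / 108e3         # ~27,000x higher clock

print(f"transistors alone: one doubling every "
      f"{years / log2(transistor_growth):.2f} years")
print(f"transistors x clock: one doubling every "
      f"{years / log2(transistor_growth * clock_growth):.2f} years")
```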

      Dan

      December 2, 2014 at 9:13 AM

  9. I think computers are starting to exceed the challenges that most consumers can throw at them, so the demand isn’t quite there. For example, with the rise of virtual machines, you can run the equivalent of 4+ computers on a single machine without noticing any performance issues. If the CPU became 2x, 4x, 8x as powerful, there just isn’t enough stuff we can think of to do with it.

    I also think it’s unlikely we’ll see human-level AI in the near future because I don’t think people are quite smart enough to understand their own minds, although it might turn out to be easier than we think.

    Mostly, I look at a world getting thrown for a loop over stuff like Ferguson, and I think the bright shiny future might just be a hallucination.

    lion of the lionosphere

    December 1, 2014 at 9:54 PM

    • Are you kidding? Any consumer can throw a challenge at a computer that it will not be able to do. Earn a billion dollars and bring it to my bank account by the end of this month. Go find a girl on the web and chat her up online, so that she comes to my house ready for everything next Saturday night. Find prince charming to marry me next week.

      MyTwoCents

      December 3, 2014 at 5:42 PM

  10. One of the best decisions I’ve made was buying an el cheapo acer for < $200, cracking it open and putting a gorgeous Samsung SSD in it. Once you go SSD, you'll never go back.

    Dome_Town

    December 1, 2014 at 10:21 PM

    • *ASUS, not acer

      Dome_Town

      December 1, 2014 at 10:21 PM

  11. It’s network speed that’s at a premium now.

    ModernReader

    December 1, 2014 at 10:33 PM

  12. Good time to take profits on open $TKMR shorts if anyone took my advice and opened a position during the ebola craze in October. (As I recommended at the time on this blog.)

    Karl

    December 1, 2014 at 10:34 PM

  13. “. . . because this means that computers won’t become super-smart, develop evil self-awareness, and enslave humans.”

    Not like evolution and humans, huh?

    Curle

    December 1, 2014 at 11:41 PM

  14. If Moore’s Law is ending then C++, or some other native language, will become dominant again since every CPU cycle needs to be squeezed out. All the Hipster Ruby programmers will be out of a job or become Java/C# programmers.

    Evil Spock

    December 1, 2014 at 11:46 PM

    • C++ is verbose compared to languages like Ruby. It is not going to supplant them in web development, where getting the project done right, with fewer lines and fewer bugs per line, is more important than performance. Neither is Java or C#.

      If anything, we may see more high-level languages like Ruby, for two reasons. First, software projects are getting bigger, so bugs are getting harder to find. Second, as processors get more parallel and heterogeneous, exposing metal-level details to programmers makes less sense.

      Lowe

      December 2, 2014 at 8:44 AM

      • Language verbosity is hardly the problem.

        The efficiency of the compiled code is the way to address diminishing hardware returns. So C is the way to B.

        Lion of the Turambar

        December 3, 2014 at 1:02 PM

      • @ Lion of the Turambar

        Yes, that is what Evil Spock said, and I will repeat my reply. That is wrong. High level language use is going to increase. I will restate my reasons more explicitly.

        C is verbose when compared to something like Ruby. I understand they are not for the same application, but the statement is true. More LoC mean more opportunities for bugs. In C this could mean more of the costly bugs that make it into integration testing or products. Only in some applications, e.g. embedded, can factors like efficiency and determinism outweigh debug time and cost.

        I think, in those applications, we will see high-level languages being used to generate performant C or assembly, but using models that are not optimal, just good enough. The end of increasing processor clock rates, and the eventual end of increasing transistor density, do not make this less likely. They make it more likely, because of increasingly parallel and heterogeneous compute devices.

        It is hard to write a compiler to turn sequential C code into instructions for an SoC, with logic fabric or a GPU on the chip with the processor. What is easier is to get the user to write parallel code from the beginning, as in kernels in CUDA or OpenCL. What languages express this well? Erlang and Haskell come to mind. People have said this for decades, but recent trends make the difference.
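
As a toy illustration of writing the parallelism up front (a Python sketch standing in for real CUDA/OpenCL kernels, which this is not):

```python
# Toy sketch: express the work as independent chunks up front, so a runtime
# can farm it out to however many cores exist. This is the same idea, at a
# much smaller scale, as writing kernels in CUDA or OpenCL.
from multiprocessing import Pool

def kernel(chunk):
    # Each chunk is processed independently; no shared state to reason about.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]
    with Pool() as pool:                      # uses all available cores
        partial_sums = pool.map(kernel, chunks)
    print(sum(partial_sums))
```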

        Lowe

        December 3, 2014 at 7:29 PM

    • Yes, this is a good place for a comment on Wirth’s law:
      “software gets slower faster than hardware gets faster”

      http://en.wikipedia.org/wiki/Wirth%27s_law

      Or, restated (tongue in cheek) as Gates’ Law:
      “The speed of software halves every 18 months.”

      Software programmers have been incredibly lazy about efficiency. Software now has many, many layers of abstraction, each layer being a huge efficiency hit. Incentives for efficiency have been nil for decades as computing power flowed like water.

      Optimistically, this means that there remains a lot of room for improvement if and when Moore’s law runs out. Pessimistically, it means the average consumer running Windows and Word has not seen anything like what Moore’s law would suggest.

      It takes me a couple of minutes to reboot my Windows PC with everything that automatically runs on it. By now, according to Moore’s law and the increase in clock speed, I should be able to restart my computer in a millionth of a second 🙂 .

      Dan

      December 2, 2014 at 9:35 AM

  15. Moore’s Law has ended simply because the mobile-computing trend has forced CPU and GPU manufacturers into focusing their efforts on power efficiency. The upside is that progress has been great in this endeavor (3D transistors, Nvidia’s Maxwell, etc.).

    So I don’t know if we can say Moore’s Law is finished. It has rather taken a different path.

    “Maybe we should be happy about the end of Moore’s Law, because this means that computers won’t become super-smart, develop evil self-awareness, and enslave humans. The downside is that this could delay the coming of the singularity (or an upside for people who fear that the singularity would mean the loss of our humanity).”

    No serious AI researcher thinks anymore that human-like intelligence will develop spontaneously in computers through sheer computing power (in fact, spontaneously at all).

    Whole-brain emulation and genetic programming, both of which require more targeted human input than raw computing power, are the current approaches.

    Thomas

    December 2, 2014 at 12:28 AM

  16. There are a lot of different ways to go faster. Intel has had an amazing run for several decades, but the world will move on with or without Intel.

    Steve Sailer

    December 2, 2014 at 1:06 AM

  17. Intel is shipping 14nm chips.

    You will definitely see a divergence in computing needs. AI tasks will eat up processing power. In this area GPUs are still scaling great. We will easily get a 100x increase in the next 20 years. Not to mention we just have to get general AI working on a supercomputer; then we can build custom-designed ASICs that do the same job for a fraction of the cost.

    The recent progress with neural nets has been astounding. Google has an internal dataset with 15,000 classes, and has classifiers that are better than humans.

    iamthep

    December 2, 2014 at 1:38 AM

  18. OT: Sony got hacked and salaries got leaked. Several in the comments are pointing out the hypocrisy of the leftists in Hollywood.

    http://fusion.net/story/30789/hacked-documents-reveal-a-hollywood-studios-stunning-gender-and-race-gap/

    destructure

    December 2, 2014 at 1:50 AM

    • Again, leftists are not a problem when blacks are not around for them to be their minions. These guys are not the culprits. The de Blasio types and those suffering from White guilt who pushed their equality agenda at the expense of their co-racialists are the ones who are ruining this country.

      JS

      December 2, 2014 at 9:45 AM

      • no worries.

        there’s the whole world.

        there’s the whole of America.

        and then there’s the GOP/libertardian…

        aka crazy hillbilly.

        Robert Gabriel Mugabe

        December 2, 2014 at 10:09 PM

  19. I noticed it when Intel stopped marketing their processors according to their internal clock frequency – several years ago. For the same reason, you can use a five-year-old computer without much reduction in quality of experience. No computing power increase means no reason for application makers to require it. By the way, this is a nail in the coffin of the much-hyped self-driving cars. I mentioned it here before: It may be counterintuitive, but one average human brain is more powerful in total computing/image processing/decision making power than all processors manufactured to date by all processor vendors – I literally mean the total power of all of them.

    MyTwoCents

    December 2, 2014 at 3:25 AM

  20. You confused Moore’s Law.

    What doubles every 2 years is not performance but the number of transistors. Moore’s Law is still with us, as long as there are new processes. Broadwell was delayed, but we don’t know if that was a one-time thing or not.

    As someone said before, Dennard scaling stopped a while ago. Dennard scaling meant that voltage and current, and therefore power, scale down with transistor size.

    Since that doesn’t happen, to get the same power (what you can actually put in a laptop, or cool in a desktop) you need to have the same number of transistors. There are of course tricks around it, but it is not easy.

    Another simple way to increase performance is to have more cores. Unfortunately, writing parallel code is hard. Actually using more than a few threads is really hard for most programs. There are of course ways around it, but they don’t really work yet (for example, speculative multithreading).

    Anyway, 14nm is so small that it won’t be long before it can’t scale anymore (and people have been saying that for at least 20 years).

    Yoav

    December 2, 2014 at 7:03 AM

  21. Moore’s Law was about transistor size and density; speed was only derivative. The actual end of speed increases happened several years ago.

    The Singularity is indefinitely postponed, probably cancelled. Evil Spock’s comment is interesting. Are we going back to FORTRAN? COBOL never went away.

    bob sykes

    December 2, 2014 at 7:34 AM

  22. IDK, for the 3D interactive things I want to do (holodeck), very powerful CPUs indeed will be required.

    fanofmiley

    December 2, 2014 at 8:15 AM

  23. Regarding the comment “there is no need for faster compute because existing computers do everything people need already”:

    As someone who does computationally intensive algorithm design, I have to say this is very silly. Many applications in applied statistics, mathematics, physics, etc. are bottlenecked by CPU speed. Recurrent neural networks are one example: if CPUs were 6x as fast in 2021, our ability to do artificial intelligence in various domains would be greatly enhanced.

    Take the problem of describing the content of images algorithmically as an example:
    https://gigaom.com/2014/11/18/google-stanford-build-hybrid-neural-networks-that-can-explain-photos/

    Faster computers mean larger neural nets, which means applications like this perform much better. When we start thinking about tying the semantic interpretation and action of machines together in neural networks, CPU power will be everything, and the slowdown could have major consequences for the future of computationally expensive technologies like RNNs.

    muelleau

    December 2, 2014 at 8:31 AM

    • GPUs are much more efficient than CPUs at performing graphics computations. Along the same lines, could we develop processors specialized for neural net processing – let’s call them NNPUs (neural net processing units) that improve performance far beyond what CPUs could support, and remove CPU speeds as the bottleneck for neural net processing performance?

      Michael H

      December 2, 2014 at 6:00 PM

      • GPUs are faster than CPUs for some tasks because they take parallelization to an extreme: CPUs execute 4-8 instructions per clock, whereas a modern GPU executes 3000+ instructions per clock, 400-800 times as many. However, a GPU is much slower sequentially; GPUs operate at ~725 MHz compared to 3000+ MHz for a top-end CPU. Indeed, GPUs are used for some neural network training tasks, but there are still limits based on the clock speed of the GPU, because some tasks must be done in sequence rather than in parallel.
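
Multiplying those rough per-clock figures out (illustrative numbers, not a benchmark):

```python
# Rough throughput comparison using the per-clock figures above.
cpu_ops_per_clock, cpu_clock_hz = 8, 3.0e9      # high-end CPU
gpu_ops_per_clock, gpu_clock_hz = 3000, 725e6   # modern GPU

cpu_throughput = cpu_ops_per_clock * cpu_clock_hz
gpu_throughput = gpu_ops_per_clock * gpu_clock_hz
print(f"CPU: {cpu_throughput:.2e} ops/s, GPU: {gpu_throughput:.2e} ops/s")
print(f"GPU/CPU ratio for fully parallel work: {gpu_throughput / cpu_throughput:.0f}x")
```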

        Of course, hardware specialized for neural nets could help things (I don’t really know much about this), but this would preclude advances like the above, where a creative team used commodity hardware to design something really innovative without access to special technology.

        muelleau

        December 2, 2014 at 8:54 PM

      • GPUs are said to be good for “mining” bitcoin.

        Lion of the Blogosphere

        December 2, 2014 at 9:40 PM

      • The operations required to mine bitcoins are completely parallelizable on the GPU, so they crush CPUs at this task. Basically very little has to be done in sequence to mine bitcoins.

        muelleau

        December 3, 2014 at 4:52 PM

  24. I think your take on this is correct. Having progress come to a stop in our lifetime is depressing. On the other hand, continuing on the path of ever increasing computer power could well mean the end of the human race. So on balance, the end of Moore’s Law is a good thing.

    Ed

    December 2, 2014 at 9:14 AM

  25. O/T – Old fart guido in San Francisco’s Italian district was shot because he called someone the “N” word, which of course is a no-no coming from a non-black person.

    http://sanfrancisco.cbslocal.com/2014/11/30/man-wounded-in-san-francisco-shooting/

    JS

    December 2, 2014 at 10:04 AM

    • This guy was no guido. He was a 60-year-old mentally ill man. Some local people think he’s a harmless eccentric; others see him as a pain in the behind who got what was coming to him. But JS, aren’t there enough guidos in Staten Island and Brooklyn for you to write about? Now you need to be searching for guiodos among the San Francisco elderly?

      Maryk

      December 2, 2014 at 1:42 PM

      • “guidos” not “guiodos”! But a “guiodo” might be a good term for a little guido, the linguistically correct “guidoito” being too much of a tongue-twister.

        Maryk

        December 2, 2014 at 1:45 PM

  26. Another O/T involving another guido, Pantaleo, the SI “Pants on Fire” cop who put Eric Garner in a chokehold and said he did nothing wrong.

    Lion, do you think he will be indicted? I say yes, and SI or NYC will not be another Ferguson given the hard evidence that he used a non-kosher police tactic to contain the gentle giant, resulting in his death.

    http://www.silive.com/news/index.ssf/2014/12/pantaleo_officer_at_center_of.html

    MaryK, Italians need anger management sessions!

    JS

    December 2, 2014 at 10:17 AM

    • Maybe so, but no one who is a hothead should ever become a police officer. It takes a certain type of person to be able to handle the authority that cops have. People who know they have a relative with this type of disposition should steer the relative to a more appropriate line of work.

      I sought out an anger management home study course for myself once even though I’ve never been violent because there is a history of both bad tempers and occasional violent episodes in my family on both the Irish and the Italian sides. Ironically, the only one I found was geared toward black men who were or had been in prison.

      But as I’ve said before here, being on LOTB gives me practice in learning to control my anger and channeling it into constructive intellectual exercises (when I defend IAs against all the slander they receive here! ) And on LOTB, there are no shipping and delivery charges!

      Maryk

      December 2, 2014 at 1:27 PM

  27. The limiting factor of performance on mobile devices isn’t Moore’s Law (which is continuing); it’s power consumption and heat generation. You could get a much more powerful processor in a laptop…and then it would drain your battery in 10 minutes and/or destroy your battery in two weeks because of excessive heat production.

    Mobile devices are close to their maximum possible speed until either battery technology gets better or someone figures out how to increase transistor density without producing so much heat.

    cpk1971

    December 2, 2014 at 12:55 PM

  28. Fiddlesticks

    December 2, 2014 at 1:26 PM

    • People sometimes don’t understand that performance != frequency.

      Performance, and especially performance/watt, has grown a lot in recent years even if CPU frequency is in the same place.

      Yoav

      December 3, 2014 at 8:35 AM

  29. O/T – All those White chumps who are protesting in Ferguson, MO seem to forget about the knockout games that were plaguing the city not too long ago, in which they could have been victims of black-on-White crime.

    http://dailycaller.com/2014/08/15/knockout-game-attack-and-other-violence-in-ferguson/

    JS

    December 2, 2014 at 1:30 PM

  30. The number of transistors on chips is still going up rapidly, but clock speeds have hit a wall. The increased transistor count is mostly being used to increase the number of processor cores and the size of caches. Increasing the number of processor cores may increase the capacity of the PC, but it does not generally make a single application run faster. Most applications people are running on their home PC are more limited by disk/SSD speed or Internet speed.

    There are a few applications, like 3D rendering, that can fully benefit from the additional cores. For most people a big SSD and more RAM will improve their PC performance.

    MikeCA

    December 2, 2014 at 3:34 PM

  31. I’ve been hearing about molecular computers and quantum computers. If these happen, won’t that result in the continuation of Moore’s Law, or at least in faster processors, because of a difference in kind, not quantity, of transistors?

    CamelCaseRob

    December 2, 2014 at 5:57 PM

  32. “Maybe we should be happy about the end of Moore’s Law, because this means that computers won’t become super-smart, develop evil self-awareness, and enslave humans. The downside is that this could delay the coming of the singularity (or an upside for people who fear that the singularity would mean the loss of our humanity).”

    Do you follow artificial intelligence or machine learning research at all? Right now, neural networks (which have been rebranded as “deep learning”) are getting the best results on most tasks and seem to have the most promise.

    The main operation in training a neural network is either a matrix multiplication or a convolution of matrix multiplications (the latter is used for image data and could be used for other types of structured data). Right now the best way to do this is by using GPUs.
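
For concreteness, here is a minimal sketch of that core operation, with numpy standing in for a GPU library and the shapes made up for illustration:

```python
# Minimal sketch: one dense layer's forward pass is just a matrix multiply
# plus a nonlinearity. Batch and layer sizes here are arbitrary.
import numpy as np

batch, d_in, d_out = 128, 1024, 1024
x = np.random.randn(batch, d_in).astype(np.float32)   # input activations
w = np.random.randn(d_in, d_out).astype(np.float32)   # layer weights

y = np.maximum(x @ w, 0.0)                             # matmul + ReLU
flops = 2 * batch * d_in * d_out                       # multiply-adds
print(f"~{flops / 1e9:.2f} GFLOPs for one layer, output shape {y.shape}")
```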

    So hardware will most benefit AI by:
    - Increasing the number of cores available on GPUs.
    - Reducing the price of GPUs (though they’re already pretty cheap).
    - Improving support for quickly transferring data between GPUs.
    - Offering GPUs with lower precision, since neural networks don’t need high numerical precision.

    Alex

    December 2, 2014 at 11:28 PM

    • People don’t realize how limited most machine learning algorithms really are.
      Yes, you can feed an algorithm data, and teach it to get the right results.
      But you have to very carefully select the algorithm, and select the data, and do checks, and eliminate over-learning.

      It is not simply “here is data”. If you do that, you end up with very bad results.

      Yoav

      December 3, 2014 at 8:37 AM

      • The main obstacle to plug-and-chug machine learning algorithms is hyper-parameter tuning. For a neural network this is something like selecting the number of neurons, learning rate, momentum, cost function, etc. Someone who has a good intuitive understanding of the data and algorithms can usually get the hyper-parameters right in a few tries. But when processing power is cheap, and datasets large enough, simply trying a random grid search will eventually find the right parameterization.

        Already you see this is pretty much the case with random forests and AdaBoost; they’re pretty close to a free lunch. The problem with trees is that their performance quickly degrades when the underlying features are even slightly rotated. Neural networks, particularly the deep kinds, are much better at bypassing the need for feature engineering. However, neural networks have a higher-dimensional hyper-parameter space. Tuning them through grid search or even more sophisticated metaheuristics usually underperforms a human operator making a good choice.
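
Here is a toy sketch of what random hyper-parameter search looks like (train_and_score is a made-up placeholder for whatever model and validation metric you actually use):

```python
# Toy sketch of random hyper-parameter search: sample settings at random,
# evaluate each one, and keep the best.
import random

def train_and_score(lr, momentum, hidden_units):
    # Placeholder scoring function, not a real model: pretends certain
    # settings are better and adds a little noise.
    return -abs(lr - 0.01) - abs(momentum - 0.9) + hidden_units / 1e4 + random.gauss(0, 0.01)

best = None
for _ in range(50):                               # 50 random trials
    params = {
        "lr": 10 ** random.uniform(-4, -1),       # log-uniform learning rate
        "momentum": random.uniform(0.5, 0.99),
        "hidden_units": random.choice([64, 128, 256, 512]),
    }
    score = train_and_score(**params)
    if best is None or score > best[0]:
        best = (score, params)

print("best score", round(best[0], 4), "with", best[1])
```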

        Doug

        December 4, 2014 at 3:05 AM

    • Alex, I do not see your point. GPUs speed up large matrix multiplications of course, but they are subject to the same limits as CPUs, namely dispersing heat from the density of transistors. You’re right that there will be progress in AI due to learning how to use what we have efficiently (like using GPU for highly parallelized operations), but it’s silly to say that the inability to speed up compute units isn’t a bottleneck to AI.

      muelleau

      December 3, 2014 at 4:56 PM

  33. I know that the consumer market isn’t indicative of the potential, but it’s worth noting that I recently bought an 8-core processor for the same price that I bought a 4-core processor for 5 years ago. However, it’s not notable for the reason that you might think. When compared, the AMD 8-core’s performance is roughly the same, if not very slightly less, than that of my previous awesome AMD 4-core that they no longer make. Theoretically it can handle more parallel tasks, and it uses perhaps 1/6th less electricity, but I see the difference in core count as a minor improvement to what is essentially an equivalent processor sold 5 years ago. Most people won’t look at the performance numbers though, and will just compute “double the processors = twice as good”.

    Intel came out with some great novel architecture in the same time frame, but performance isn’t double what it was 5 years ago. I prefer AMD for other reasons, but the architecture has undergone changes of low comparative significance.

    Tom

    December 11, 2014 at 10:44 PM

  34. It’s pretty amazing how Google can search 6 billion web pages in a second (or at least create the illusion that it did that).

    Lion of the Blogosphere

    December 16, 2014 at 11:46 PM

