
I was expecting a bigger performance gap between Go and Python.

Perhaps we've come a long way in the interpreted-languages department? Or maybe the cost of interpretation is vastly smaller than the speed losses due to IO or memory access?
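
A rough way to get a feel for that second point (a minimal sketch; the file path is made up, and the numbers will obviously vary by machine):

    import time

    def cpu_bound(n):
        # Pure-Python loop: every iteration goes through the bytecode interpreter.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def io_bound(path):
        # Reads a file in 1 MiB chunks: time is dominated by the OS and disk,
        # not by the interpreter.
        with open(path, "rb") as f:
            while f.read(1 << 20):
                pass

    start = time.perf_counter()
    cpu_bound(10_000_000)
    print("cpu-bound:", time.perf_counter() - start, "s")

    start = time.perf_counter()
    io_bound("some_large_file.bin")  # hypothetical path
    print("io-bound:", time.perf_counter() - start, "s")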

"Interpreted is always slower than compiled lol right guise?" was a tired refrain ten years ago, much less today.



CPython's bytecode interpreter has not really improved at all in 10 years relative to JITs and compilers.
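
For context, the interpreter in question is the loop that walks bytecode like this (standard library dis module; the function is just an illustration, and the exact opcodes vary by Python version):

    import dis

    def add(a, b):
        return a + b

    # Prints the bytecode that CPython's evaluation loop executes, e.g.
    # LOAD_FAST, BINARY_ADD / BINARY_OP, RETURN_VALUE.
    dis.dis(add)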


That's a problem with CPython, not interpreted languages as a whole.

Maybe they'll make it faster in Python 3000, right?


PyPy is probably your best bet if you want a faster Python. The official CPython will never have speed as a primary focus due to a number of self-imposed limitations like:

Since it's the reference implementation, it should be easy to read and learn from.

They're not really willing to accept patches that speed some things up if they slow other things down at the same time (this has been the main problem with all the GIL-removal patches that have shown up over the years).

They're not willing to accept patches that break any existing code or libraries.

PyPy, on the other hand, has none of these limitations and happily breaks all three, making it great for a subset of the Python code out there.
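
If you want to see where PyPy's JIT actually pays off, a tight pure-Python arithmetic loop is the classic case. A minimal sketch (run the same script under CPython and under PyPy and compare the timings yourself; the workload is just an illustration):

    import time

    def mandel_ish(width, height, max_iter=100):
        # Naive per-pixel complex arithmetic in pure Python: slow on CPython,
        # typically much faster on PyPy once the JIT warms up.
        count = 0
        for y in range(height):
            for x in range(width):
                c = complex(x / width * 3 - 2, y / height * 2 - 1)
                z = 0j
                for _ in range(max_iter):
                    z = z * z + c
                    if abs(z) > 2:
                        break
                else:
                    count += 1
        return count

    start = time.perf_counter()
    print("points inside:", mandel_ish(200, 200))
    print("elapsed:", time.perf_counter() - start, "s")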


They make it pretty damn fast. But non-JIT runtimes have strict limits on how fast they can go, unless you design your language specifically for interpreter speed (like Lua).



