
There are some really neat things in the Swift example that they compare to Python, like the range literal similar to Ruby's, and the fact that "+" is a function like in Lisp, although I'm really not a fan of the CFAbsoluteTimeGetCurrent().

But the Python example doesn't make me trust the rest of the article. It is clearly a Swift example, translated verbatim to Python.

Idiomatic Python would be this:

    import time
    for it in range(15):
        start = time.time()
        total = sum((it,) * 3000)  # sum a tuple of 3000 copies of `it`
        end = time.time()
        print(end - start, total)
Which is shorter, and way faster.

Now of course, Python is slower than Swift (although a numpy version would not be, but I get that it's not relevant in general-purpose machine learning). But misrepresenting a language is not a good way to make a point.
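
For illustration, here is a rough (untested) sketch of what I mean by a numpy version, where np.full is just one of several ways to build the array of repeated values:

    import time
    import numpy as np

    for it in range(15):
        start = time.time()
        # np.full builds 3000 copies of `it`; the fill and the sum
        # both run in C instead of the Python interpreter
        total = np.full(3000, it).sum()
        end = time.time()
        print(end - start, total)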



Hi, author here, thanks for taking the time to read the article!

The objective of the demo was not to see which language could sum up a bunch of numbers the fastest. You could keep optimizing that until you are left with just `print(<the resulting number>)`. The objective was to have a simple example of looping over an array a bunch of times. The only reason I ended up summing the numbers in the array and printing them was so that LLVM wouldn't optimize it away and be unfair towards Python. I actually wrote it first in Python tbh.


Makes sense, thanks.


`CFAbsoluteTimeGetCurrent()` is just a call to a system library. There are libraries that implement APIs like:

    let clock = Clock.system
    let now = clock.thisInstant()
there just isn't a nice core library interface yet.



