rwxrwxrwx's comments | Hacker News

I always find it easier to produce publication-quality figures using gnuplot (but not with its default settings, mind you) than with Matplotlib. Check out http://gnuplotting.org/
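To give an idea of what I mean by non-default settings, here's a rough sketch (terminal, fonts, file names, and colors are just placeholders):

  # publication-style defaults: cairo terminal, explicit fonts and line styles
  set terminal pdfcairo size 9cm,6cm font 'Helvetica,10'
  set output 'figure.pdf'
  set style line 1 lc rgb '#0060ad' lw 2 pt 7 ps 0.5
  set border 3                 # only bottom and left borders
  set tics nomirror out
  set key top left
  plot 'data.dat' using 1:2 with linespoints ls 1 title 'measurement'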

Also, it's hard to beat gnuplot's speed at refreshing a live scatter plot with many thousands of points using the x11 terminal.
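A self-refreshing version is just a loop (file name made up; gnuplot 5 syntax):

  # live.gp -- redraw the scatter plot twice a second while the data file grows
  set terminal x11
  while (1) {
      plot 'live.dat' using 1:2 with points pt 7
      pause 0.5
  }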


I'm using gnuplot for plotting too (the actual gnuplot application, not a library that uses gnuplot as its backend).

And I usually keep computation and plotting separate. Computation produces data files, and a gnuplot script generates the plots. This separation means charts can be updated later if needed, collected data can be reused in other plots, and additional analysis can be performed and the charts augmented.
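As a sketch of what I mean (file names invented), the plotting side is just a standalone script that consumes whatever the computation step persisted:

  # plot_results.gp -- reads results.dat written by a separate computation step
  set terminal pngcairo size 800,600
  set output 'results.png'
  plot 'results.dat' using 1:2 with lines title 'result'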

So I personally don't see many advantages to integrating chart generation into the computational pipeline itself (except for monitoring a computation, or maybe when user feedback is needed to direct it). Because of that, libraries that encourage generating charts from a computed array instead of dumping that data into persisted files feel like an anti-pattern to me.


Completely agree. I keep computation steps (which create CSV files) separate from charting steps. I use make to orchestrate the pipelines. I also keep everything under source control and insert git commit ids into every chart. This ensures that all the analysis and charts can be linked directly to the code that produced them.
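For what it's worth, the commit-id stamping can be done from inside the gnuplot script itself; a rough sketch (file and column choices are made up):

  # stamp the chart with the commit that produced it
  GIT_SHA = system('git rev-parse --short HEAD')
  set label 1 sprintf('rev %s', GIT_SHA) at screen 0.98, 0.03 right font ',7'
  set datafile separator ','
  plot 'results.csv' using 1:2 with lines title 'metric'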


Somewhat agree, but sometimes the need to change or filter the data that goes into a chart is only realized after plotting it. Combining the data and the figures into one "pipeline" makes it easy to iterate, especially during exploratory data analysis. Regardless, this comment made me think about my general workflow, which usually combines the two. Appreciate this comment.


The command is equivalent to the more verbose:

  plot 'data.dat' using 1:3 with points pointtype 7 linecolor 3 title 'Interesting measurements'
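(For anyone curious, gnuplot accepts abbreviations for each of those keywords, so the short form would be something like:

  plot 'data.dat' u 1:3 w p pt 7 lc 3 t 'Interesting measurements'

)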


Thank you!


For the sake of anyone reading this thread who isn't in the know: many of these libraries are really written in C/C++ and have Python bindings.


I said ported, not implemented; any of those libraries sprouting Lisp bindings is about as likely as them being rewritten in Lisp. So it's the same thing, and the point is clear: I don't care about some zany runtime feature, I care about the ecosystem.


Stop moving the goalposts: your answer to a commenter who stated that Common Lisp is faster than Python (a fact) was a list of packages, many of which (1) are not even written in Python and (2) actually do have Common Lisp bindings in some cases.


Too real.


You might also be interested in https://news.ycombinator.com/item?id=32363295


Communications of the ACM, September 1991 (Vol. 34, No. 9)


The Apache HTTP Server and the GNU Scientific Library come to mind.


One possible approach would be to extract the product type marking codes / marking information from a large collection of datasheets.


Unfortunately, many datasheets don't include the short codes for the devices they describe, and manufacturers often hide them behind search forms. Hope you can figure out who made the part you're looking at, AND that the totally-still-in-business company has a lookup tool!

