Gimp-ML – Machine Learning Python plugins for GIMP (github.com/kritiksoman)
118 points by Hard_Space on May 9, 2020 | 14 comments


I think we are taking the first steps down a very interesting road of using DL for image processing:

- Automatically cutting out a person or an object https://www.remove.bg/

- Turning 2D photos into 3D perspective videos https://shihmengli.github.io/3D-Photo-Inpainting/

- Artistic style transfer for videos https://www.youtube.com/watch?v=Khuj4ASldmU

- Super-resolution for old games https://www.resetera.com/threads/ai-neural-networks-being-us...

Etc, etc.


There are lots of great image processing algorithms outside of DL as well. Search for "SIGGRAPH" on YouTube. If only these algorithms were easily accessible as plugins in GIMP ...

https://en.wikipedia.org/wiki/SIGGRAPH


Unfortunately, writing a paper about implementing your fancy new algorithm (TM) in GIMP is hardly ever successful, so many of those algorithms die in some git repo.


I assume (hope) it won’t be long before erasing objects within a scene and adding new ones (e.g. clouds) are widespread, too.


Highly compressed YouTube videos don't really seem like a good medium for demonstrating an image scaling algorithm.


75% of the length of some of the videos is just waiting for the plugin to process the image... seems like before/after pics would be a better way to demo the plugin effects.


Yep, and it doesn't show a 1:1 pixel view or 800%/1600% zoom comparisons against nearest-neighbour, linear, or bicubic scaling, which would give you an idea of how good the SR effect is.

Sure, I can download it on my PC and test it, but honestly I'd rather pick an old paper and implement my own version that runs faster and that I can programmatically apply over sets of images if I want to (roughly along the lines of the sketch below).

Not bragging, I've been meaning to do this "when I have spare time" and since lockdown I've been busier than ever /facepalm

Since my use case calls for running on CPU (for the moment at least), I was looking at a few architectures that are more CPU-friendly (smaller nets, fewer convolutions) than GANs.
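
For reference, here's a rough sketch of the kind of batch comparison I mean, using Pillow; the folder names and the 4x factor are just made up for illustration, not anything from Gimp-ML:

    # Hypothetical comparison sketch (not part of Gimp-ML): upscale a folder of
    # images with nearest-neighbour, bilinear, and bicubic filters so the results
    # can be inspected side by side at 1:1 pixel zoom against the SR output.
    from pathlib import Path
    from PIL import Image

    FILTERS = {
        "nearest": Image.NEAREST,
        "bilinear": Image.BILINEAR,
        "bicubic": Image.BICUBIC,
    }

    def upscale_folder(src_dir, dst_dir, scale=4):
        out_root = Path(dst_dir)
        for path in sorted(Path(src_dir).glob("*.png")):
            img = Image.open(path)
            size = (img.width * scale, img.height * scale)
            for name, flt in FILTERS.items():
                out = out_root / name
                out.mkdir(parents=True, exist_ok=True)
                img.resize(size, resample=flt).save(out / path.name)

    upscale_folder("inputs", "upscaled", scale=4)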


I think the plugins are very interesting. The videos demoing the features are pretty annoying, though. I can turn down the music, but most of the video is spent sitting there waiting for the effect to finish; that should be skipped. More importantly, the gray-out effect used to highlight the cursor is way too heavy. It made me lose track of where I was in GIMP and was very confusing. I would hope most people are like me and only need some subtle highlighting to track the cursor, rather than being hit over the head with it and losing the surrounding context.


Agreed. Maybe we could come up with some ML thingie to solve this? :O)


It's great that the author has posted running code, so others can try it out and learn from it.

To run this code on Debian, I had to install python2, python2 pip, python2 virtualenv, and the gimp-python plugin. The init code for the plugin dumps around 1GB of cache files into your ~/.cache directory, mostly in ~/.cache/pip/http, which then get built into a virtualenv in the git tree for the project.

Unfortunately, there is no obvious license, so nobody can build on it directly, although some of the underlying codebases have permissive licenses.

I tried out the super-resolution plugin on several test images; it didn't give good results. Motion deblurring had problems too. Both dealt poorly with input compression artifacts and with blurry low-resolution inputs. Both performed reasonably well on relatively clear high-resolution inputs (where I already knew that blind deconvolution could easily recover the blur kernel), with no obvious artifacts in the output.
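
For context, the kind of classical baseline I compared against looks roughly like this (an assumed workflow, not the plugin's code, and non-blind since the kernel is chosen by hand), using scipy and scikit-image:

    # Sanity-check sketch (assumptions: a known 5x5 box-blur kernel and
    # scikit-image's sample image): blur a clean test image, then deconvolve
    # with Richardson-Lucy and compare against what the ML deblur plugin gives.
    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, restoration

    image = data.camera() / 255.0        # float image in [0, 1]
    psf = np.ones((5, 5)) / 25.0         # known 5x5 box-blur kernel
    blurred = convolve2d(image, psf, mode="same", boundary="symm")

    restored = restoration.richardson_lucy(blurred, psf, 30)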


Just from watching the videos, I'm sure I'm not grasping the magnitude of what this plugin achieves. I'll try to do some tests myself. If it is as good as advertised, then congratulations, very impressive!


The videos should show the before/after for longer and without the fancy transition (which makes it harder to see the difference).

Other than that, wow, cool!


I really wanted to do this with RawTherapee! I'm glad someone else beat me to it; I can't wait to try this out.


These days people have a lot of insecurity about their looks and voice; we should make software that lets people hide their real voice and change their face into something they consider more attractive.

Not only that, everyone should work from home. That would make the body-language dominance thing disappear, and people with good brains would get a chance to come forward (real meritocracy).



