As a practical matter, it helps to vary some parameter in addition to hue, such as texture. Varying lightness or saturation serves a dual purpose: it also keeps your charts understandable when printed in black and white.
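One quick way to check the black-and-white case is to compare the relative luminance of each palette colour (using the WCAG/sRGB luminance formula; the palette below is just a hypothetical example):

```python
# Check that a palette stays distinguishable in black-and-white by comparing
# relative luminance (WCAG sRGB formula). Palette values are hypothetical.
def relative_luminance(hex_color):
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

palette = ["#004488", "#ddaa33", "#bb5566"]  # hypothetical light/dark-varied palette
lums = sorted(relative_luminance(c) for c in palette)
gaps = [b - a for a, b in zip(lums, lums[1:])]  # want these well above zero
```

If any gap is near zero, two colours will collapse into the same grey when printed.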
For those doing web work, Chrome has a helpful way to simulate some of the most common vision deficiencies in its developer tools. From the menu, open the More tools → Rendering panel, then down at the bottom there is a setting called “Emulate vision deficiencies”. It can do blurred vision as well as several types of colour blindness.
Adding to this, the Chrome (83+) and Firefox (81+) developer tools both do a reasonable job at the simulation, using the method of Machado et al. (2009) [1].
Unfortunately, the linked-to simulator, like many of the online simulators, does a very poor job. When simulating protanopia, reds should appear darker, due to the lack of L cones. However, many simulators incorrectly display red as bright green instead.
I've also written a color picker that uses the Machado et al. method to enforce a CAM02-UCS minimum perceptual distance for normal vision and color vision deficiency [2].

[2] https://colorcyclepicker.mpetroff.net/
That seems like a very useful tool for planning new colour schemes. I wish there were more discussion and tools based on true human perception of colours, not just numerical representations that aren’t necessarily calibrated to how human vision works.
Yes, much better ways of representing and working with colours are known. Sadly, support for them is missing in most of the software we use, including Adobe Illustrator and Photoshop, the Affinity suite, Sketch, Figma and all major browsers. The best we get out of the box is usually HSB/HSL.
Of course, you can make the effort to construct a colour palette using a better model and then convert the colours. However, as soon as you start deviating from those carefully chosen colours — to build a gradient, or to apply filters or transparency, for example — you’re back to relying on the software to do the maths, and if its internal colour model is weak, the results will reflect that.
Photoshop does support Lab, but all of the advanced color science (and much better UX) is found in tools for movie production (Resolve and friends), not in photo editors, which are largely shit.
I linked to the supplementary information [1] in my previous comment, but here's the link for the paper [2]. The method is implemented by the Colorspacious library [3] for Python, and the source for my color picker [4] contains both JavaScript and WebGL implementations.
> When simulating protanopia, reds should appear darker, due the lack of L cones. However, many simulators incorrectly display red as bright green instead.
For those who can't "see" it:
For people who have protanomaly, bright red (#ff0000) may look more like #cc0000, but it's clearly different from bright green (#00ff00) or gray (#cccccc).
Things get confusing with colors like pale brown (#997755) which may look something between fern green (#557755) and dim gray (#667755).
(1) It can be a fair chunk of your audience -- roughly 1 in 10 men in some locales are colorblind.
(2) There are many flavors of colorblindness. Tools like https://www.color-blindness.com/coblis-color-blindness-simul... are helpful to make sure your palette works for most of them.